
LWN.net Weekly Edition for January 18, 2007

lca2007: Christopher Blizzard

[Jeff Waugh and Chris Blizzard] The keynote speaker on the second day of linux.conf.au 2007 was Christopher Blizzard, currently with Red Hat. His topic was "relevance," and, in particular, the relevance of the free software movement to the rest of the world.

One way to be relevant is to create top-quality products. There was an emphasis on the word "product," rather than "project"; Chris was talking about making things for people. The best products, he says, are those which genuinely change the way we live. The example he used was cellular telephones, which have truly changed the ways in which people communicate. Your editor, often reduced to communicating with his children via text message, is not convinced that all these changes are for the better, but the talk did not address this side of things.

The next slide was a marketing shot of the iPhone. Is this a product which will change how people live? Nobody in the audience was willing to argue that it was.

Then came Firefox - a project which Chris worked on for some years. Firefox "makes the web less annoying," and makes a point of respecting its users, which is important. It's still not clear that Firefox has changed the way people live, however. Even so, Firefox had some lessons to offer:

  • You can't change the web from the back end. No matter how much good and innovative work is being done on the server side, the software which controls the user experience will shape the web. Firefox has been successful because it is "driving from the front," and influencing how the users see and work with the net.

  • Going direct to users is important; you can't count on others to distribute your software for you.

  • Stick to your core values. The Mozilla project gets significant amounts of money from its sponsors, but it is unwilling to consider sponsorships which would require user-hostile changes.

  • Have a mission. A project will only produce a great product if it has a strong idea of what it is trying to accomplish.

How many years, asked Chris, has it been the year of the Linux desktop? Is Linux relevant for desktop users? In general, his answer was "no." Linux is showing up in interesting places, however: the Nokia N800, telephones, and the One Laptop Per Child project.

OLPC, says Chris, truly is a relevant project which will be changing lives. It has a well-defined mission - providing computing technology in a way which furthers the education of children in the developing world - and it is creating a product which furthers that mission. To that end, a number of interesting innovations have been made; these include the OLPC display (which, among other things, is readable in full sunlight), the mesh networking feature, and the ability to power it with a hand-operated generator. The Sugar user interface also rates high on the list; it has tossed out much of the standard desktop metaphor in favor of a new design aimed at the OLPC's target user base.

So, based on this, how should a project make itself relevant? Chris suggests:

  • Find an important set of clients, and work toward their needs. In the OLPC example, the clients are developing-world children (or, perhaps, the governments which represent them).

  • Find good designers and trust them. Free software developers are often dismissive of the need for good design, but you cannot create a great product without it. Once you have found people who can do this design, you must trust them, even if their work takes you in directions which are surprising and unfamiliar.

  • Make your product for other people. Doing so requires developing a certain amount of empathy for the intended clients and getting past the "itch scratching" mode of development.

A project which follows these guidelines, says Chris, has a good chance of being relevant well into the future.

Comments (9 posted)

Interview with Second Life's Cory Ondrejka

January 17, 2007

This article was contributed by Glyn Moody

Cory Ondrejka, CTO of Linden Lab, has some serious programming credentials. Before joining Linden Lab in late 2000, he worked on US government projects and Nintendo games; as well as writing much of the original core code for Second Life, he also designed the Linden Scripting Language (LSL), and wrote the LSL execution engine. He talks to Glyn Moody about the background to Linden Lab's decision to take the Second Life client open source, how things will work in practice, and what's going to happen server-side.

When did Linden Lab start to think about the possibility of opening up Second Life's source code?

We've been thinking about it fairly seriously for, gosh, nearly three years now. The effort to really get there is something that got kicked off pretty early in 2006.

Was there any particular stimulus at that time?

We started looking at what our residents were doing in preparation for some speaking we did at [O'Reilly's] ETech in March. One of the things that we discovered was that a very large percentage of our residents – something on the order of 15% of people who logged in - were using the scripting language. So you start realising that there are tens of thousands of people at least, probably more like hundreds of thousands at this point, who have written code related to Second Life. And so it seems a little bit silly to not enable that creative horsepower to be applied to our code as well.

Was the decision to open the viewer's code a difficult one?

I think internally, as an organisation, buying into the idea is something that we were able to get to relatively quickly. People sometimes don't realise that the kind of work you have to do to be able to open source is exactly the same work that you're doing to close exploits and fix bugs. It's actually not a separate set of tasks in many ways.

Over 2006 there was also a very active reverse-engineering effort called libsecondlife that has something like 50 or 60 developers on their mailing list. They've been doing a very impressive job of reverse engineering the protocols and figuring out what's going on. They were finding exploits quite regularly and doing a good job of sending them to us, and saying: Hey, we found this, you guys might want to fix it.

What we found, of course, is that it doesn't really matter whether we open source or not, the exploits are going to get found - that's what has happened in all software. And so why not make it easier for folks like libsecondlife, if they're going to be poking around anyway? Let them have the code so that they're more likely to be able to fix things that they find, and broaden it to a larger community of developers than just the developers who wanted to get involved in a reverse-engineering effort.

Why did you choose the GNU GPLv2 license for the code?

We ended up talking about that a lot. We were basically surveying what license is still the dominant license in the open source community: it's GPLv2, and so in our minds it has a lot of legitimacy. It's also the one that gives us the most flexibility down the road, where if we want to do a dual-licensing scheme, or a more-than-dual licensing scheme, it's a lot easier to come from GPL than sort of back into it.

In fact, you already offer a commercial license, I believe?

We do. I think that for now we would be sort of surprised if a lot of people jumped on the commercial license today, but we have a lot to learn. This is a very big step: there's never been a product that was in the dominant position that then open sourced. Open source is usually used by folk who are either trying to gain market share, or projects that are very early stage. So in that sense, we're trying to be pretty careful and conservative in our decision-making process, because this is in some ways new ground. Much like three years ago, when we gave intellectual property rights back to our residents, and allowed them to own what they made, that was a very new step in this space, and so I think we're continuing the tradition of bleeding edge in our decision making.

When did you start the detailed preparatory work, and what did that entail in terms of preparing the viewer code for release?

It really got started in May and that process continued until the release. It was everything from doing external security audits, hiring additional staff, and making sure that you could build it on all the platforms, to building the manifests for all the zip and tarballs we were going to distribute.

Did you have to do much in terms of making the code more legible or more modular?

I think we haven't done as much of that as we would like. Now, of course, nobody who has actually written code and then released it ever thinks the code is clean or modular enough; in fact there are pretty big changes coming down the pike to make the code better.

And that was a pretty active topic of debate: do we wait until after those changes to release the code? We decided that it made more sense to get the code out there. You can always find reasons not to open source, and ultimately it's better to let people begin getting expertise in the code even if we warn them: Hey, this part of the code is going to be changing. And what's neat is that less than 24 hours after we put the code out we've already accepted a user patch.

Could you say a little about these big changes that are coming through?

What we need is to be able not to have to update monolithically. Right now, we take down the grid, we update everybody's viewer, and everything comes back up. And obviously that's neither scalable nor testable. And so there's this long series of changes to be made to let us upgrade in a more heterogeneous way. And we are beginning to publish what those changes are going to be so that people know that they're coming and what to expect.

What are the things that you haven't been able to open source?

Well, for example, streaming textures in Second Life use the JPEG2000 compression standard, j2c, and we use a proprietary bit of code to do the decompression. Now libjpeg, which is the open source version of this, does j2c, but it's way too slow. So one of our first challenges to our user base is: Hey, go smack libjpeg around a bit, and optimise it and then we will happily swap it in.

Why do you distribute binary copies of libraries that are almost certain to be found on any GNU/Linux system -- zlib and ogg/vorbis, for example?

It just seems simpler to give people really complete sets and say: If you go through these steps you will build successfully. There are few things more frustrating than getting all excited about getting some code and you go to build it and it barfs. So we've really been trying to take steps to make sure that doesn't happen. Within about an hour and a half of us putting the code up, there was a picture up on Flickr of somebody who had compiled and made a change already.

In terms of the timing, Linden Lab's been very circumspect in talking about this move: the signals were later this year rather than at the beginning. Why is it happening now, much earlier than you originally indicated?

Linden Lab has always been probably more open than is good for us about what we're trying to do when. We have always talked about features that we're working on, and given estimates of when we were trying to release them. Like most software, we usually end up being a little bit later on those than we'd like to be. And so going forward, we're trying to do a better job of underpromising and overdelivering rather than the opposite. So if people get mad at me because I deliver stuff faster than I was going to, I think I can live with that. I like to beat expectations from here on out.

What do you hope to gain from open sourcing the viewer?

First of all, we expect to get a better viewer. We think we will do a better job of finding bugs and exploits with the Second Life community looking at the code. If you go out medium to longer term, I think we will see active feature development as the community gains expertise with the code and we continue to implement protocol changes to make it easier to implement the features. More importantly, I think we're going to be building expertise in running an open source project because this is just step one for us in terms of where we think Second Life needs to go.

Second Life is growing very rapidly at this point. We think that it is a Web-scale project, not a game-scale project. We will not be happy if at the end of the day we only have ten million users; I think we would all see that as a tremendous failure. So, if we're going to scale to Web levels, obviously we need to keep open-sourcing the pieces that make sense to open source. In order to do that, we need to build expertise at running open source projects, and being part of open source projects, and engaging the open source community. So we've taken the piece that we were first able to do that with, and we're going to learn a lot over the next couple of quarters.

Were you surprised by the large number of positive comments on the blog posting that announced the move?

There's no question that the Second Life community is the most creative, capable, intelligent community ever targeted on one project in history. To give them the ability to make the project even more their own - it does not surprise me that they're pretty psyched about that.

What are the resources that you've put in place to work with the community that you hope to build around the code?

Right now, we basically have an army of one, Rob Lanphier, who did this before. He was at RealNetworks, and spearheaded the Real server open source project Helix.

What's he going to be doing, and how will the code submissions be processed?

He is going to be helping to hire a team, because we're eventually going to need a whole team just to manage the ingress of code. Right now, he has helped set up JIRA, the project management software, which users can register on to submit bugs and patches. There is a wiki for the open source project, and he has been pretty much managing that.

The QA team is also directly plugged in to the patch submission process so that they can pull patches in, test them on private set-ups, see what's going on. The developers will be keeping an eye on things as well. Like a lot of what Linden Lab does, it's going to be a relatively diffuse project.

You mentioned JIRA for issue tracking, what about the actual code management?

We use Subversion. There isn't yet a public Subversion repository, but we're getting there.

Will you be giving accounts on that to outside contributors?

I don't know exactly what Rob's plan is for that, but I would assume that there's going to be something like that. I expect the libsecondlife people will have a Subversion repository up before we do anything, anyway. They may host the code also -- they're pretty aggressive about doing that.

To foster external contributions, how about moving to a plug-in architecture?

I think that all of us agree that a plug-in structure on the client makes sense. It's just a matter of figuring out whether we want to leverage an existing one or re-invent the wheel.

You've indicated that you view opening up the client as a learning experience for open-sourcing the server in the right way: what additional issues will you need to address here - presumably the proprietary Havok physics engine is going to be a problem?

Certainly, there is the question of proprietary code. We may be able to do exactly what we did on the client side, where we are distributing binaries. In six months, when this [move to open up the client] is successful, it may make for very interesting conversations with folks. We can say: Hey, look, you are the leader in this sector, you should open source, here's why we did it and it worked. And I think the fact that there aren't any proof-points of that is maybe part of what scares companies from doing that. I think we're going to be a very interesting test case.

Obviously the server raises a host of security issues. We have a roadmap that we think solves those, and we're going to be sharing that roadmap sometime this quarter with the community, once we get it sufficiently refined that we're happy with it. We see a host of use-cases for servers where we need to make some pretty profound architectural changes in terms of how trust is established between user and server, between servers and each other, and servers and backend systems. But we see a path, and so it's just a matter of applying development resources to that path and moving along it.

What kind of things are you having to deal with?

In broad security terms, [it's] about code running on hostile machines. Right now, all of the simulator machines are machines that we own in our co-los. It's very different to have that code running on a machine in your garage, even though you're probably a trustworthy guy. That raises issues of trust. Once you have code running on hostile machines, it doesn't really matter whether you have the source or not: you can start doing things. And so we need to trust the simulators less, which means moving some of the things that the simulators currently do in a trusted fashion, out of them.

Does that mean centralizing certain Second Life services?

That depends. Let's say you were a large research organization and you wanted to be able at times to use Second Life in a more private way. You might want to control even some of the centralized services. But what you don't want is just a fragmented set of parallel universes that can't talk to each other, because you then lose the benefit that makes Second Life so strong, which is the fact that all these communities can connect across traditional geographic and community boundaries. And so the secret sauce becomes how do you architect it in a way that allows both Internet and intranet use.

Do you think that these future worlds will be part of the main Second Life geography or will there be portals from them through to your world?

Well, I think the answer is "yes", because there are some use-cases where it makes sense to be part of the big world, and other cases it makes sense to be a portal away.

Presumably you've also got to deal with issues like identity as avatars move between different worlds, and the tricky one of money?

It's almost like you've read my list: you're dead-on. What's good is, unlike six and a half years ago, when we got rolling on this stuff, some of these have been partially solved by the Web. There are much better exemplars today than there were six and a half years ago. And so for a lot of what we're going to be doing we can use existing technologies.

What does that imply about the convergence of 3D virtual worlds with the Web?

I think that when you look at anything in problem space, there's some things that the Web does very well. Text, it does it really well; one-to-many, it does it very well; sequential solo consumption of content, it does really well. But there are some things that shared collaborative space and virtual worlds and 3D do really well: if you need place, or you need people to be consuming the content together, where the audience matters, or knowing that we're consuming at the same time matters, or when you need simultaneous interaction.

So I think it's a little odd to imagine that either of those hammers will solve all problems. Instead, what you want is to be able to take problems and move them into the correct space. If you're doing text entry, doing it in 3D is just a big pain in the butt. So there are places for the Web, and there are places for virtual worlds, and I think what you want is as much data to flow between those two as smoothly as you can.

Finally, once you've opened up the code to the client and server, what will be left for Linden Lab to make some money from?

I think that would be a little bit like implying there's no business to be had on the Web if you give away Apache. The Web has shown us where a lot of the value is: identity, transactions, search, communities. And so nothing that we've talked about requires that Linden Lab give up any of those pieces. I think the key is for us to enable growth, building a much, much bigger market, and attempt to make money where it makes sense.

Glyn Moody writes about open source and virtual worlds at opendotdotdot.

Comments (4 posted)

The Grumpy Editor's Guide to graphical IRC clients

This article is part of the LWN Grumpy Editor series.

IRC (Internet Relay Chat) is a venerable protocol which allows people to type messages at each other across the net. Your editor remembers a fascinating day in 1991, when observers in Moscow used an IRC channel to report on the Soviet coup attempt; it was an early example of the power the net would come to have. In subsequent years, however, your editor has had little time for IRC. Getting LWN together every week requires a strong focus on getting things done, and IRC can be a real productivity killer. Pretending that IRC does not exist has been most helpful in getting the Real Work done.

Recently, however, your editor has had reason to wander into IRC again. Having not done much in this area for a while, your editor lacked a favorite IRC client - or any IRC client at all. Thus began the search for the best tool for this particular job - and, eventually, this article.

Anybody who has investigated the topic knows that there is no shortage of IRC clients to choose from. It would appear that free software developers are often afflicted with this particular itch. There is no real hope of reviewing them all, so your editor will not even try. Instead, this review is restricted to graphical clients which appear to have a real user base and which are under active development. Your editor also lacks access to AOL instant messaging, MSN messaging, etc., so this review will be focused on IRC functionality. Some clients can work with many networks; that capability will be mentioned when appropriate, but it will not be reviewed further. Finally, your editor has little to say about channel operator commands, file downloads, or other such features of IRC; this article will focus on the basics.

Gaim

Gaim is a longstanding GNOME messaging client. It does IRC, along with AIM, ICQ, MSN Messenger, Yahoo, Jabber, Gadu-Gadu, and so on. If it's a messaging protocol, Gaim can probably handle it. Those using it for IRC only will find that Gaim brings a certain amount of baggage ("buddy lists" and such) which is not useful in that context, and that some of the terminology used ("rooms") does not quite match the IRC conventions. None of this is particularly problematic in real use, however.

The main Gaim window is tab-oriented, with each IRC channel in its own tab. This organization is space-efficient, but it can make it hard to monitor more than one channel - though the color-coded tab tags help. Tabs can be detached, however, allowing the user to fill the screen with single-channel windows. Gaim windows use smooth scrolling, a feature your editor got tired of back in the VT100 days; unfortunately, there appears to be no easy way to turn it off. On the other hand, users can turn off the insertion of cloyingly cute smiley graphics into the message stream.

Private messages result in the quiet creation of a new tab - something which can be easy for the user to miss. In general, the handling of private messages in IRC clients seems a little awkward.

Gaim has support for IRC servers which can authenticate nicknames with passwords. It also has a plugin feature which can be used to extend the client; available plugins add support for additional protocols, expose more preference options, perform encryption, and more.

Finally, on your editor's system, the Gaim client was a huge process. It should not be that hard to create an IRC client which requires less than a 50MB resident set, but the Gaim developers have not done that. Running Gaim made the whole system visibly slower. Gaim also doesn't take the hint when all of its windows are closed; one must explicitly tell it to go away by selecting "Quit" from the "Buddies" menu in the "Buddy list" window - something your editor found less than entirely intuitive.

Konversation

Konversation is a KDE-based client centered around IRC. Like many KDE clients, it is feature-heavy and visually pleasing.

Like Gaim, Konversation is based on a single window with tabs. In this case, however, there does not appear to be any way to detach the tabs into their own windows. One nice feature in Konversation is "remember lines," lines drawn in each conversation window when it goes out of view. When returning to a channel, the user knows just where to start reading to catch up on the new stuff. This feature gets a little aggressive at times, drawing several lines together in low-activity channels; one presumes this little glitch can be ironed out. Konversation also has an option to suppress all of the channel event lines (comings and goings) which tend to clutter up the conversation.

Konversation can handle passwords, but it required a bit more setup work than some other clients. Also available is a "URL catcher" tab which simply accumulates URLs posted on subscribed channels.

Overall, Konversation comes across as a featureful and useful IRC client. The documentation which comes with it is well-done and comprehensive; it helped your editor get past his initial questions ("how do I make it stop joining #kde?") quickly. Detachable tabs would make it nearly perfect.

ERC

Perhaps your editor is pushing it a bit by including ERC in this list. ERC is an emacs-based IRC client; it can be added onto emacs 21, and it has been bundled into the upcoming emacs 22 release. Emacs is a strongly graphical environment these days, and ERC offers all of the point-and-click configuration and operation options that the other clients reviewed here have.

ERC maintains a separate buffer for each open IRC channel. It tends to hide those buffers, and there is no simple tab bar for switching between them. It is a simple matter for an emacs user to configure the display as desired, with different channels displayed in different windows or frames. Somebody who is not familiar with the emacs way of doing things would have a harder time of it, however.

There is a separate buffer for managing the connection with the IRC server, and that is where private messages show up. It is probably safe to say that very few users will keep that buffer visible, with the result that private messages tend to go unnoticed. ERC also arguably features the ugliest, most unreadable channel list window of any of the clients reviewed.

Display is highly configurable. By default, ERC is less color-happy than most other graphical clients, a feature which your editor appreciates. There is a full list of options for filtering users and message types, performing text transformations, etc. And, of course, the experienced emacs user can simply attach elisp functions to any events requiring more involved customization.

There is no provision for marking the last-read text in ERC. This functionality is easily obtained by moving point off the end of the buffer, essentially saving the current location - but the user must remember to do it.

Overall, your editor likes the feel of working with ERC - but, then, he is known to be sympathetic to emacs-based solutions. There is no need to figure out how to search for specific text, for example - all of the normal text searching functions work as expected. Saving text or a partial log is straightforward. There is no one-line text window to type into; one simply types into the buffer and long lines are broken naturally. And so on. Emacs users will probably be happy with ERC; the rest of the world is unlikely to pick up emacs to be able to use it.

XChat

XChat is a popular client with a relatively long history. Your editor tried out the GNOME version of XChat on several networks. Finding servers was relatively easy, since XChat comes equipped with a long list built into it. One thing which becomes immediately apparent, however, is that XChat grabs the channel list in a blocking operation. The client can go completely unresponsive for several minutes until the listing is complete - not the friendliest introduction possible.

The main XChat window features a tree listing of servers and open panels on the left, and a display of one of those channels in the main pane. There does not appear to be any way to view more than one channel's traffic at any given time. The left pane marks channels with unread activity - with a separate mark if the only activity is enter and leave events.

The XChat feature list is long. It has a "last read" line in each window, though how it decides when something was read remains a bit of a mystery. It is not directly related to expose, focus, or mouse button events. Those who are relatively uninterested in actually reading IRC traffic can set up window transparency and background images. There is a plugin mechanism which can be used to set up a URL grabber window or to script the client in Perl or Python. Moving the pointer over a correspondent's name yields a popup with that person's name and origin information. There is no password support, however. Unlike some other clients, XChat appears to have relatively little support for channel operator functions.

Graphically, XChat is reasonably pleasing, with a use of color which is not entirely excessive. Private messages are handled in a relatively straightforward and visible way - but the dialog for selecting a user to talk to is painful. Overall, it is a capable and easy client adequate for the needs of a large subset of IRC users.

SeaMonkey

Once upon a time, the Mozilla client looked as if it were about to grow to encompass the functionality of most other programs found on a typical desktop system. The Mozilla project eventually decided to redirect its efforts toward the more focused Firefox and Thunderbird tools, leaving the old, comprehensive application behind. There were users who did not like that state of affairs, and who dedicated some time to continuing its development. The result was the SeaMonkey project. Tucked into one corner of this tool is an IRC client.

Your editor's introduction to this tool was somewhat rocky. It offered up Undernet as one of its connection possibilities. Your editor decided to check it out and see what channels were available. After a long period where the client was completely unresponsive (attempting to list information for over 20,000 channels), it simply crashed. Note to the SeaMonkey developers: if you must crash, please have the courtesy to do so before making the user wait for a long network transfer.

When SeaMonkey is operating, it provides a single, tabbed window with nicknames on the left. There is no way to have more than one channel on-screen at a time. There is no password support. All told, the SeaMonkey IRC client ("ChatZilla") comes across as unfinished and rough compared to a number of the alternatives. Your editor has seen nothing here to convince him that web browsers need to support IRC too.

ksirc

Ksirc is a simple IRC client shipped with KDE; it does not appear to have a web page dedicated to it. It offers less help than many other clients; your editor's install of ksirc did not know about any IRC servers, for example. Once configured, however, it operates well enough.

The bulk of the interface is done through a single window, with each channel represented by a tab. It is possible to detach the tabs into separate windows, making it possible to see multiple windows at once. There is also a "ticker mode" where messages scroll by in a single-line window, but this mode did not render properly on your editor's system. A separate window shows the list of servers and open channels, but it does not appear to actually be useful for much.

Your editor appreciates restraint in the use of color, but ksirc, perhaps, takes the idea too far by default. The window is essentially monochromatic, dense, and difficult to read. The use of color can be configured, however, and there is a set of filters which can be used to highlight messages with text of interest. When the automatic colorizing mode is enabled, however, it has an unhealthy tendency to pick gray for some of the more active users - a bit of a pain considering that the window background is, by default, gray.

Overall, ksirc is a sufficiently capable tool for most needs. It gives the impression of having been left behind by some of the other KDE-based IRC clients, however, and of not getting much development attention in recent times.

Kopete

A more contemporary KDE client is Kopete. This tool, perhaps, is the KDE answer to Gaim; it appears to have support for just about any messaging protocol one can imagine. Once again, your editor only looked at the IRC functionality.

If ksirc is dense and hard to read, Kopete is the opposite. The default display is full of white space, divider bars, icons, smilies, and more. Here, too, it can be hard to follow a conversation for the simple reason that very little of it actually fits into the window. Kopete supports themes, however, and it does not take long to find a theme which makes a little better use of screen real estate.

At the outset, Kopete's interface is a bit intimidating. The small window that comes up seems to offer little in the way of interesting operations - joining a channel, say. For that, one must know to right-click on the little icon which shows up in the taskbar tray and wander through the menus. It all works fine once one gets the hang of it, but a new user trying to get started without having read the manual is likely to be frustrated for a while.

It is hard to miss private messages in Kopete - the application creates a new window and throws it at you. For the serious messaging user, there is a whole set of options for configuring just how hard the client tries to let you know about various sorts of events. About the only thing that is lacking is a "last read" line. With that in place, and with an appropriate theme, Kopete is a powerful and attractive tool.

KVirc

Finally, your editor tried out KVirc, which is a bit of a different approach to IRC clients. Unlike Kopete, which leaves the first-time user trying to figure out what to do, KVirc starts with a set of configuration windows - one of which even displays the GPL text for approval. The user ends up with a big window containing another for server selection. It would appear that just about every IRC server on the planet has been put into this dialog; it's a long list.

After selecting a server (and, perhaps, entering password information), the user encounters one of the more peculiar aspects of KVirc. Every channel has its own window, but all of those windows are contained within the big KVirc window. There is a background image in the big window, and the channel windows are all translucent. It is all visually striking, but your editor could not help wondering why the developers felt the need to implement their own window manager. It even has options for tiling all of the subwindows - with a choice of several different algorithms.

KVirc also has KVS, its own, special-purpose scripting language "inspired by C++, sh, perl, php and mIrc". There is a separate window for monitoring socket operations, no end of options for playing sounds, a set of anti-spam and anti-flood filters, and more. It's all powerful and striking, but one cannot help wondering whether all that brilliant development energy could have gone into something more generally useful than another IRC client.

For people who spend much of their lives in IRC, KVirc might well be the tool of choice. It's visually striking, feature-rich, and users can script their own bots directly within the client. For your editor's purposes, however, KVirc is an overly heavy tool, wanting the full screen and ongoing attention.

Conclusion

Some readers will certainly note the biggest omission from this review: bitchx. It is, beyond doubt, a powerful client; bitchx was left out primarily because it is not a graphical client. Those who are determined to remain in the curses world are unlikely to be much interested in the other clients listed here, so there doesn't seem to be much point in trying to compare them.

So which client will your editor use when he wishes to be grumpy with others in real time, one line at a time? ERC probably remains at the top of the list, but XChat is also a useful and capable client. If your editor were a user of other messaging protocols as well, it would pretty much come down to Gaim or Kopete, depending on one's desktop orientation. Your editor's high-school son tends to quickly minimize windows when others walk into the room, but he would appear to have settled on Gaim.

In the end, however, just about any of these clients is adequate for the job. One cannot help but wonder why the free software community has produced such a large set of IRC clients. Yes, IRC is an important communication channel, and a well-designed client can make IRC more pleasant to work with, but it still does not seem like there would be room for that many applications doing essentially the same thing. One cannot fault developers for scratching an itch and giving the result to the world. Perhaps, once they have achieved the creation of the world-dominating IRC client, some of these developers will move on to the creation of something truly revolutionary.

Comments (37 posted)

Page editor: Rebecca Sobol

Security

Brief items

Chaostables for confusing nmap scans

January 17, 2007

This article was contributed by Jake Edge.

Chaostables is a recently released collection of code that provides a means to confuse an nmap scan. The author, Jan Engelhardt, has provided these capabilities as both netfilter modules for Linux 2.6.18-20 and as iptables rules. He has an excellent description of what he is trying to accomplish and how he does it, as well.

Utilities like nmap (described in an LWN article last year) are often used by those with malicious intent to discover available hosts, open ports, OS versions, and the like to help target their attacks. Chaostables seeks to generate confusing results to these probes. To that end, Engelhardt has derived a set of behaviors that correspond to these types of scans and a set of rules to detect and deflect them.

Since 2.4, the standard way of doing Linux packet filtering is by using the iptables utility which provides a userspace interface to the netfilter kernel modules. Netfilter provides a set of kernel hooks for examining and manipulating network packets and is the framework for Linux firewall implementations. Administrators define rules that identify particular kinds of packets and specify what to do with them; those rules are ordered and collected into chains which are then grouped into tables. All of this packet policy can then be pushed into the kernel via the iptables utility.
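
Conceptually, the arrangement looks something like the toy C model below. This is only a sketch of the rule/chain/table hierarchy described above - it is not the kernel's actual data structures, and it ignores jumps, default policies, counters, and much else:

    #include <stdbool.h>
    #include <stdio.h>

    /* Rules are ordered into chains, chains are grouped into tables, and
     * the first rule whose match function accepts a packet decides its
     * fate.  A purely conceptual model, not the netfilter internals. */

    struct packet { unsigned short dport; bool syn; };

    typedef bool (*match_fn)(const struct packet *);

    struct rule  { match_fn match; const char *verdict; };
    struct chain { const char *name; const struct rule *rules; int nrules; };
    struct table { const char *name; const struct chain *chains; int nchains; };

    static bool is_ssh(const struct packet *p)      { return p->dport == 22; }
    static bool is_bare_syn(const struct packet *p) { return p->syn; }

    static const char *run_chain(const struct chain *c, const struct packet *p)
    {
        for (int i = 0; i < c->nrules; i++)
            if (c->rules[i].match(p))
                return c->rules[i].verdict;      /* first match wins */
        return "policy ACCEPT";                  /* fall through to the chain policy */
    }

    int main(void)
    {
        const struct rule input_rules[] = {
            { is_ssh,      "ACCEPT" },
            { is_bare_syn, "DROP"   },
        };
        const struct chain input  = { "INPUT", input_rules, 2 };
        const struct table filter = { "filter", &input, 1 };

        struct packet probe = { .dport = 80, .syn = true };
        printf("table %s, chain %s: %s\n", filter.name, input.name,
               run_chain(&input, &probe));
        return 0;
    }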

The chaostables rules start with dropping some ICMP packets that could reveal the existence of the host and then start concentrating on the kinds of packets sent by scanning utilities. Techniques like TCP stealth, SYN, connect and grab scans are detected and dropped to attempt to hide the host while still allowing 'real' network traffic. These rules are then rolled up into the 'portscan' netfilter module in order to reduce the complexity of the chains that need to be installed.

A second kind of chain provides ways to disguise the underlying system by making Linux appear to be another OS entirely. Network scanning utilities often try to throttle their scans when they detect a system that limits the number of ICMP or RST packets sent per second. Linux is not one of those kinds of systems, but the CHAOS chain makes it look as if it is by limiting RST and ICMP packets to two per second. It also uses the 'random' netfilter rule to generate negative responses on closed ports only some of the time. The net effect is that the scanner gets inconsistent results - sometimes ports appear closed, sometimes not - with the added bonus of potentially slowing down the scan.
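
The policy itself is simple enough to sketch in user space. The following hypothetical C fragment is not the chaostables code; it merely implements the same idea - allow at most two negative responses per second, and even then answer a closed-port probe only some of the time:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Decide how to answer a probe to a closed port, mimicking the CHAOS
     * chain's policy: send at most two negative responses (RST/ICMP) per
     * second, and even then only answer part of the time, so that repeated
     * scans see inconsistent results. */
    static const char *chaos_policy(void)
    {
        static time_t window;        /* start of the current one-second window */
        static int sent_this_second; /* negative responses already sent in it  */
        time_t now = time(NULL);

        if (now != window) {         /* new one-second window: reset the count */
            window = now;
            sent_this_second = 0;
        }
        if (sent_this_second >= 2)   /* over the two-per-second budget: silence */
            return "DROP (rate limit exceeded)";
        if (rand() % 2)              /* randomly pretend the port is open */
            return "DROP (feign open port)";

        sent_this_second++;
        return "REJECT (honest negative response)";
    }

    int main(void)
    {
        srand(time(NULL));
        for (int i = 0; i < 10; i++)
            printf("probe %d -> %s\n", i, chaos_policy());
        return 0;
    }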

The CHAOS chain can be combined with the TARPIT chain to cause ports to appear to be open when in fact they are not. This can slow down a network scan as it attempts to elicit additional information from a seemingly open port. The TARPIT chain can consume router and/or firewall resources by appearing to be an open connection, so chaostables provides the DELUDE chain. It will make ports appear to be open on an initial connect (SYN), but revert to their true closed state for any additional traffic.

Chaostables is quite an interesting use of the netfilter technology and probably uses it in ways that the authors never expected. It may be that only the most paranoid of system administrators will want to implement these chains, but they will be available if needed. In addition, the techniques and code provided in the package are very useful as examples for other applications.

Comments (3 posted)

Security reports

Phishing Attacks Continue to Grow in Sophistication (Netcraft)

Netcraft examines the latest trends in the world of Phishing. "Phishing attacks are continually evolving, as fraudsters develop new strategies and quickly refine them in an effort to stay a step ahead of banking customers and the security community. Here are some of the phishing trends and innovations we noted in 2006".

Comments (none posted)

New vulnerabilities

acroread: multiple vulnerabilities

Package(s):acroread CVE #(s):CVE-2006-5857 CVE-2007-0045 CVE-2007-0046
Created:January 11, 2007 Updated:October 26, 2009
Description: Adobe's Acrobat Reader has the following vulnerabilities:

The Adobe Reader Plugin has a cross-site scripting vulnerability that can be triggered by processing malformed URLs. Arbitrary JavaScript can be served by a malicious web server, leading to a cross-site scripting attack.

Maliciously crafted PDF files can be used to trigger two vulnerabilities; if an attacker can trick a user into viewing such a file, arbitrary code can be executed with the user's privileges.

Alerts:
SuSE SUSE-SA:2009:049 acroread, 2009-10-26
Gentoo 200910-03 acroread 2009-10-25
Red Hat RHSA-2007:0021-01 acroread 2007-01-22
Gentoo 200701-16 acroread 2007-01-22
SuSE SUSE-SA:2007:011 acroread 2007-01-22
Red Hat RHSA-2007:0017-01 acroread 2007-01-11

Comments (1 posted)

bluez-utils: hidd vulnerability

Package(s):bluez-utils CVE #(s):CVE-2006-6899
Created:January 16, 2007 Updated:May 14, 2007
Description: hidd in BlueZ (bluez-utils) before 2.25 allows remote attackers to obtain control of the Mouse and Keyboard Human Interface Device (HID) via a certain configuration of two HID (PSM) endpoints, operating as a server, aka HidAttack.
Alerts:
Red Hat RHSA-2007:0065-01 bluez-utils 2007-05-14
Ubuntu USN-413-1 bluez-utils 2007-01-24
Mandriva MDKSA-2007:014 bluez-utils 2007-01-15

Comments (none posted)

horde-kronolith: local file inclusion

Package(s):horde-kronolith CVE #(s):CVE-2006-6175
Created:January 17, 2007 Updated:March 7, 2008
Description: Kronolith contains a mistake in lib/FBView.php where a raw, unfiltered string is used instead of a sanitized string to view local files. An authenticated attacker could craft an HTTP GET request that uses directory traversal techniques to execute any file on the web server as PHP code, which could allow information disclosure or arbitrary code execution with the rights of the user running the PHP application (usually the webserver user).
Alerts:
Gentoo 200701-11 horde-kronolith 2007-01-16

Comments (none posted)

kdenetwork: denial of service

Package(s):kdenetwork CVE #(s):CVE-2006-6811
Created:January 11, 2007 Updated:February 1, 2007
Description: The KsIRC 1.3.12 utility in kdenetwork is vulnerable to a remote denial of service attack that can be caused by a malicious IRC server sending a long PRIVMSG string. This causes an assertion failure and an associated NULL pointer dereference.
Alerts:
Gentoo 200701-26 ksirc 2007-01-29
rPath rPSA-2007-0007-1 kdenetwork 2007-01-15
Ubuntu USN-409-1 kdenetwork 2007-01-15
Mandriva MDKSA-2007:009 kdenetwork 2007-01-10

Comments (none posted)

libgtop2: buffer overflow

Package(s):libgtop2 CVE #(s):CVE-2007-0235
Created:January 15, 2007 Updated:August 9, 2007
Description: The /proc parsing routines in libgtop are vulnerable to a buffer overflow. If an attacker can run a process in a specially crafted long path and then trick a user into running gnome-system-monitor, arbitrary code can be executed with the user's privileges.
Alerts:
Fedora FEDORA-2007-657 libgtop2 2007-08-02
Red Hat RHSA-2007:0765-01 libgtop2 2007-08-07
Debian DSA-1255-1 libgtop2 2007-01-31
rPath rPSA-2007-0014-1 libgtop 2007-01-23
Gentoo 200701-17 libgtop 2007-01-23
Mandriva MDKSA-2007:023 libgtop2 2007-01-18
Ubuntu USN-407-1 libgtop2 2007-01-15

Comments (none posted)

libneon: denial of service

Package(s):libneon CVE #(s):CVE-2007-0157
Created:January 13, 2007 Updated:January 17, 2007
Description: The URI parser in neon versions 0.26.0 through 0.26.2 has a denial of service vulnerability. Remote servers can cause a crash by sending a URI with non-ASCII characters.
Alerts:
Mandriva MDKSA-2007:013 libneon 2007-01-12

Comments (none posted)

libsoup: denial of service

Package(s):libsoup CVE #(s):CVE-2006-5876
Created:January 13, 2007 Updated:January 29, 2007
Description: The libsoup HTTP library does not sanitize input sufficiently when parsing HTTP headers. This can be exploited to cause a denial of service.
Alerts:
Fedora FEDORA-2007-109 libsoup 2007-01-29
Mandriva MDKSA-2007:029 libsoup 2007-01-26
Ubuntu USN-411-1 libsoup 2007-01-23
rPath rPSA-2007-0015-1 libsoup 2007-01-23
Debian DSA-1248-1 libsoup 2007-01-12

Comments (none posted)

oftpd: denial of service

Package(s):oftpd CVE #(s):CVE-2006-6767
Created:January 16, 2007 Updated:January 17, 2007
Description: By specifying an unsupported address family in the arguments to a LPRT or LPASV command, an assertion in oftpd will cause the daemon to abort. Remote, unauthenticated attackers may be able to terminate any oftpd process, denying service to legitimate users.
Alerts:
Gentoo 200701-09 oftpd 2007-01-15

Comments (none posted)

opera: multiple vulnerabilities

Package(s):opera CVE #(s):CVE-2007-0126 CVE-2007-0127
Created:January 13, 2007 Updated:January 17, 2007
Description: The Opera browser has a heap overflow vulnerability involving the DHT markers in JPEG files. If a specially crafted JPEG file is read on a web site, arbitrary code may be executed with the privileges of the user.

Also, the createSVGTransformFromMatrix() function does not correctly handle passed-in objects; this can be used to execute arbitrary code.

Alerts:
SuSE SUSE-SA:2007:009 opera 2007-01-15
Gentoo 200701-08 opera 2007-01-12

Comments (none posted)

wget: denial of service

Package(s):wget CVE #(s):CVE-2006-6719
Created:January 11, 2007 Updated:January 23, 2007
Description: The wget http file retriever application has a problem with the ftp_syst function in ftp-basic.c. A malicious FTP server which sends a large number of blank 220 responses to the SYST command can cause wget to crash, resulting in a denial of service.
Alerts:
rPath rPSA-2007-0011-1 wget 2007-01-23
Mandriva MDKSA-2007:017 wget 2007-01-15
Fedora FEDORA-2007-043 wget 2007-01-10
Fedora FEDORA-2007-037 wget 2007-01-10

Comments (2 posted)

wordpress: multiple vulnerabilities

Package(s):wordpress CVE #(s):CVE-2006-6808 CVE-2007-0107 CVE-2007-0109
Created:January 16, 2007 Updated:January 17, 2007
Description: When decoding trackbacks with alternate character sets, WordPress does not correctly sanitize the entries before further modifying a SQL query. WordPress also displays different error messages in wp-login.php based upon whether or not a user exists. David Kierznowski has discovered that WordPress fails to properly sanitize recent file information in /wp-admin/templates.php before sending that information to a browser. An attacker could inject arbitrary SQL into WordPress database queries. An attacker could also determine if a WordPress user existed by trying to login as that user, better facilitating brute force attacks. Lastly, an attacker authenticated to view the administrative section of a WordPress instance could try to edit a file with a malicious filename; this may cause arbitrary HTML or JavaScript to be executed in users' browsers viewing /wp-admin/templates.php.
Alerts:
Gentoo 200701-10 wordpress 2007-01-15

Comments (none posted)

Page editor: Rebecca Sobol

Kernel development

Brief items

Kernel release status

The current 2.6 prepatch is 2.6.20-rc5, released by Linus on January 12. It contains a number of fixes, and might be the last -rc release before 2.6.20. No patches have hit the mainline git repository since -rc5; this situation will probably not change until after Linus returns from linux.conf.au.

The current -mm tree is 2.6.20-rc4-mm1. Recent changes to -mm include the e1000 development tree, the HID development tree, unionfs, and the asynchronous filesystem I/O patches.

Comments (none posted)

Kernel development news

LCA: The state of the Nouveau project

In any conference, there comes a time when one has to wonder what the people who do the talk scheduling were thinking. For lca2007, that moment came when your editor realized that the talks on OLPC (Jim Gettys), real time (Ted Ts'o), and Nouveau were all scheduled together. Nouveau won out, but it was not an easy decision.

The Nouveau project is an effort to develop a set of free 3D drivers for NVidia chipsets. NVidia has long annoyed the free software community with its refusal to release free drivers or programming information for its video chipsets. The Nouveau folks have had enough of that, and they are doing something about it. Dave Airlie used his slot at linux.conf.au to talk about the project and its current status.

Nouveau got its start in February 2005, though serious work did not begin until June of that year. The project was announced at FOSDEM 2006, at which point others started to help. There are currently about six developers doing serious work on Nouveau.

The project is relying on reverse engineering for the information needed to write free drivers. To that end, the developers have put together a set of tools. At the top of the list is renouveau, which is designed to reveal the commands sent to the card in response to specific operations. Using the existing binary drivers, renouveau sets up a context, then scans the process's mappings until it finds the command FIFO. It then requests an operation and sees how the FIFO changes. With enough operations, a pretty good idea of how the adapter is programmed to specific ends can be had. This was not a trivial tool, and the better part of a year was put into its development.

Renouveau is useful for examining the FIFO, but it doesn't help with reads and writes to I/O registers. For that, there's another set of tools, starting with valgrind-mmt - a version of valgrind designed to trap I/O memory operations. Libsegfault provides a modified mmap() which doesn't actually establish the mappings the caller asked for; it traps the subsequent segmentation faults and dumps out the attempted operations. There is another tool, called kmmio, which performs a similar task for register operations done in kernel space. Finally, the project uses a BIOS tracer which runs BIOS code in x86emu and traps I/O register accesses.
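
The trick behind tools like libsegfault can be demonstrated in ordinary user space: hand the caller a mapping with no access permissions, catch the resulting SIGSEGV, log the faulting address, and then let the access proceed. The sketch below is a hypothetical illustration of that mechanism, far simpler than the real tools (which, among other things, re-protect the page so that every access is caught):

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void *watched;
    static size_t watched_len;

    /* SIGSEGV handler: report the faulting address, then open the page up
     * so the trapped access can complete when the handler returns.  A real
     * tracing tool would re-protect the page to catch the next access. */
    static void on_fault(int sig, siginfo_t *info, void *ctx)
    {
        (void)sig; (void)ctx;
        fprintf(stderr, "trapped access at %p\n", info->si_addr);
        mprotect(watched, watched_len, PROT_READ | PROT_WRITE);
    }

    int main(void)
    {
        struct sigaction sa;

        memset(&sa, 0, sizeof(sa));
        sa.sa_sigaction = on_fault;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);

        /* Give the "caller" a mapping with no permissions instead of the
         * one it asked for; every touch of it faults into on_fault(). */
        watched_len = (size_t)sysconf(_SC_PAGESIZE);
        watched = mmap(NULL, watched_len, PROT_NONE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        ((volatile unsigned char *)watched)[0] = 0x42;  /* trapped, logged, allowed */
        printf("value after trapped write: 0x%02x\n",
               ((unsigned char *)watched)[0]);
        return 0;
    }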

All of the information obtained from these tools is supplemented with hints from the old, free nv driver. There is also, says Dave, information "which shouldn't be there" to be found on some Russian web sites.

Where has all of this information led the project? Basic tasks, like the allocation of instance RAM and FIFO initialization, are working. Hardware context switching works - on little-endian machines. There is 2D support derived from the nv driver; it offers basic EXA and RandR 1.2 support. On the 3D front, the Mesa TCL (transform, clipping, and lighting) driver mostly works. Textures and objects do not, however. It is possible to run glxgears on nv4x chips. It has taken some time to get to this point, but Dave thinks that things will start to move a lot faster from here.

The next milestone would be to run Quake 3. That is, says Dave, an obligatory step on the roadmap. Getting there will involve texture support, a better memory manager, and better locking in the kernel DRM code. The developers (Dave in particular) are aiming for RandR 1.2 multi-head support. Once all of this is in place, the nouveau driver will have reached a reasonably capable state.

There are a lot of people asking when this will be; Dave says that the project's IRC channel is often overwhelmed by spectators looking for news. There is no wish to push the code out ahead of its time; among other things, that would nail down the API between the kernel and the X server, making things harder to change. The current hope is to have some sort of alpha release toward the end of 2007.

For people wanting to help, Dave had a simple message: they need developers. There's not much for people who can't work on driver code to do at this point. Graphics drivers, he says, are not as hard as people think. Finally, he addressed the issue of the $10K pledge for the project. It rather took the developers by surprise; they had not endorsed this drive, and had held some doubts as to whether it would be successful. How the pledge money will be handled is still being worked out; it looks like it will mostly be used for hardware purchases.

Lack of support for 3D video adapters has stalled the community for years; there has been a long wait in the hope that the vendors would come to their senses. That wait is just about over. The Nouveau project (along with various others) shows that we have the resources to figure out how our hardware works, even in the face of complex devices and uncooperative vendors. It would be better if we did not have to take things into our own hands this way, but it is nice to see how well we can do it when the need arises.

Comments (42 posted)

KHB: Recovering Device Drivers: From Sandboxing to Surviving

January 12, 2007

This article was contributed by Valerie Henson

Drivers are the dominant source of crashes and bugs in operating systems. This is especially disturbing given the proportion of operating system code that is driver code. In Linux, approximately two thirds of the source lines of code are in drivers (depending on the version). Full-time kernel developers often bemoan the quality of code in drivers; one study [PDF] found that the bug rate in drivers was actually three to seven times higher than in core kernel code. Binary drivers (hopefully being phased out) are an especially nasty source of bugs. Unfortunately, the companies and programmers writing these drivers have neither the expertise nor the incentive to write beautiful, clean, well-behaved drivers.

Efforts to limit the effects of driver bugs on the core operating system have been going on for decades, with limited success. One of the motivations behind microkernels was the desire to isolate parts of the kernel so that they could not, for example, stomp on the memory of other parts of the kernel. Safe behind the message passing interface between microkernel modules, each module only had to validate the input from other modules in order to ensure that external bugs would not interfere with its proper working. In reality, completely validating messages is harder than it looks, and the performance overhead of message passing, MMU tricks, and the code to work around them turned out to be prohibitive. A variety of more limited sandboxing techniques, isolating only likely troublemakers such as device drivers, reduced operating system crashes significantly, but left the system with a non-functioning, possibly crucial device (such as the network card). While the OS was still up and running, the system reliability, viewed from an application standpoint, was not particularly improved. From the point of view of a web server, a crashed system and a system with no network access due to a safely sandboxed but crashed network driver are practically identical.

The Solution

What we really need is a lightweight, unintrusive system to not only catch device driver errors, but to recover and restart the device driver while simultaneously covering for the device while it is re-initializing. Michael Swift, Muthukaruppan Annamalai, Brian Bershad, and Henry Levy implemented such a system for Linux 2.4.18, as described in their 2004 OSDI paper, Recovering Device Drivers [PDF]. The key idea in this paper is the shadow driver: a driver that wraps around the original hardware driver, records requests sent to it, monitors the health of the driver, and restarts the driver if it crashes, replaying any missed requests collected while the driver was restarting. You can think of a shadow driver as a substitute teacher temporarily filling in while the real driver is out sick. Each class of device drivers (sound, network, disk, etc.) requires the writing of only one shadow driver.

Shadow drivers are built on the Nooks driver isolation system, outlined in a paper in the 19th SOSP, Improving the Reliability of Commodity Operating Systems [PDF]. Nooks provides most of the benefits of the microkernel architecture for a relatively low cost. The four main services are (1) memory isolation - drivers run with most of the kernel memory read-only, (2) wrappers around data transfer between the kernel and drivers, (3) tracking of kernel objects used by the driver, and (4) a recovery manager. The Nooks architecture is simplified by the (perfectly reasonable) assumption that kernel modules are not malicious, but merely buggy, and so doesn't need to take special steps to, for example, prevent a device driver from deliberately altering memory permissions.

When a shadow driver detects that a device driver has failed, it begins actively proxying for the device driver (queuing up requests, etc.) and starts recovery of the driver. First it safely shuts down the driver, which may require some delicate work given that the driver has crashed. For example, it may need to explicitly disable interrupts on the device since a crashed driver can no longer acknowledge them. Then it reloads the driver and reconfigures it. The shadow driver will have recorded any prior configuration requests (such as "set full-duplex mode") and replays them if necessary. Then it replays any queued-up requests that accumulated during the recovery phase. Depending on the device and type of request, it may make more sense to drop the requests; for example, a shadow driver for sound will just drop any requests to play sound, since they are real-time and aren't useful to save up to play when the driver recovers. (With the Audigy sound card and driver evaluated in the paper, this resulted in a gap in the audio of about one-tenth of a second.)
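To make the record-and-replay idea a bit more concrete, here is a toy user-space sketch (all names are invented for illustration; the real shadow drivers live in the kernel and wrap the class interface of the driver being shadowed):

    /*
     * Toy illustration only: record configuration requests, replay them
     * after a simulated driver restart.  Nothing here is code from the
     * paper; all names are invented.
     */
    #include <stdio.h>
    #include <string.h>

    #define MAX_LOG 16

    struct config_req { char text[32]; };

    static struct config_req cfg_log[MAX_LOG]; /* configuration requests seen so far */
    static int nlog;

    /* Stand-ins for the real driver. */
    static void driver_apply(const struct config_req *r) { printf("driver: %s\n", r->text); }
    static void driver_reinit(void) { printf("driver: reloaded and reinitialized\n"); }

    /* Normal operation: pass the request through, but remember it. */
    static void shadow_configure(const char *text)
    {
        if (nlog < MAX_LOG) {
            strncpy(cfg_log[nlog].text, text, sizeof(cfg_log[nlog].text) - 1);
            driver_apply(&cfg_log[nlog++]);
        }
    }

    /* Recovery: restart the driver, then replay the recorded configuration. */
    static void shadow_recover(void)
    {
        driver_reinit();
        for (int i = 0; i < nlog; i++)
            driver_apply(&cfg_log[i]);
    }

    int main(void)
    {
        shadow_configure("set full-duplex mode");
        shadow_configure("set MTU 1500");
        printf("--- driver crash detected ---\n");
        shadow_recover();
        return 0;
    }

In the real system, of course, failure detection comes from the isolation layer rather than from a print statement, and the proxying phase must also queue or drop requests that arrive while the driver is down.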

The authors compared vanilla Linux, Nooks, and shadow drivers by adding bugs by hand to three drivers: a network driver (e1000), a sound card driver (audigy), and a disk driver (ide-disk). The bugs were based on real bugs reported on mailing lists in order to be as realistic as possible. They then tested the reliability of each configuration from the application's point of view. The results are summarized in the table below; shadow drivers were able to transparently recover from all tested driver bugs - bugs which normally crash the entire machine - without interrupting the application.

Application Behavior

    Device driver        Application activity      Linux-Native   Linux-Nooks   Linux-SD
    Sound                mp3 player                CRASH          MALFUNCTION   SUCCESS
    (audigy driver)      audio recorder            CRASH          MALFUNCTION   SUCCESS
                         speech synthesizer        CRASH          SUCCESS       SUCCESS
                         strategy game             CRASH          MALFUNCTION   SUCCESS
    Network              network file transfer     CRASH          SUCCESS       SUCCESS
    (e1000 driver)       remote window manager     CRASH          SUCCESS       SUCCESS
                         network analyzer          CRASH          MALFUNCTION   SUCCESS
    IDE                  compiler                  CRASH          CRASH         SUCCESS
    (ide-disk driver)    encoder                   CRASH          CRASH         SUCCESS
                         database                  CRASH          CRASH         SUCCESS

What does this mean for Linux?

Linux developers have a number of ways to reduce the impact of buggy drivers. Given that the limiting factor is usually human eyeball time, we should choose methods that rely on automation as much as possible; these include automatic bug checking, compiler-level checks, and code-level asserts. Adding an automatic driver sandbox and recovery system would be an excellent investment of kernel developer time in return for overall system stability, particularly for distribution vendors. Even implementing a subset of the features in the shadow driver system would be helpful. More than likely, Linux 2.6 already has frameworks which would lend themselves to implementing some of these features.
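As a small illustration of the cheapest of these techniques, a code-level assert in a driver might look like the following sketch; WARN_ON() is the real kernel primitive, while everything prefixed with "mydrv_" is invented for the example. One extra branch turns a silent buffer overrun into a logged stack trace and a clean error return:

    /* Hypothetical driver function; only WARN_ON(), memcpy() and -EINVAL
     * are real kernel interfaces, the rest is invented for illustration. */
    static int mydrv_write_block(struct mydrv_dev *dev, const u8 *buf, size_t len)
    {
        if (len > MYDRV_MAX_BLOCK) {
            WARN_ON(1);          /* complain in the log with a stack trace... */
            return -EINVAL;      /* ...and refuse, instead of overrunning dev->block */
        }
        memcpy(dev->block, buf, len);
        return 0;
    }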

Comments (8 posted)

RCU and Unloadable Modules

January 14, 2007

This article was contributed by Paul McKenney

RCU (read-copy update) is a synchronization mechanism that can be thought of as a replacement for reader-writer locking (among other things), but with very low-overhead readers that are immune to deadlock, priority inversion, and unbounded latency. RCU read-side critical sections are delimited by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPT kernels, generate no code whatsoever.
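As a quick illustration of what a read-side critical section looks like in practice (the pstruct structure and list names here are invented, but the primitives are the real ones), a reader searching an RCU-protected linked list might be written as:

    struct pstruct {
        struct list_head list;
        int key;
        int data;
        struct rcu_head rcu;    /* used later, when elements are freed */
    };

    static LIST_HEAD(mylist);

    static int find_data(int key)
    {
        struct pstruct *p;
        int ret = -1;

        rcu_read_lock();        /* generates no code on non-CONFIG_PREEMPT kernels */
        list_for_each_entry_rcu(p, &mylist, list) {
            if (p->key == key) {
                ret = p->data;
                break;
            }
        }
        rcu_read_unlock();
        return ret;
    }

The same pstruct structure will reappear below when we look at how elements of such a list are deleted and freed.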

This means that RCU writers are unaware of the presence of concurrent readers, so that RCU updates to shared data must be undertaken quite carefully, leaving an old version of the data structure in place until all pre-existing readers have finished. These old versions are needed because such readers might hold a reference to them. RCU updates can therefore be rather expensive, and RCU is thus best suited for read-mostly situations.

How can an RCU writer possibly determine when all readers are finished, given that readers might well leave absolutely no trace of their presence? There is a synchronize_rcu() primitive that blocks until all pre-existing readers have completed. An updater wishing to delete an element p from a linked list might do the following, while holding an appropriate lock, of course:

    list_del_rcu(p);
    synchronize_rcu();
    kfree(p);

But the above code cannot be used in IRQ context -- the call_rcu() primitive must be used instead. This primitive takes a pointer to an rcu_head struct placed within the RCU-protected data structure and another pointer to a function that may be invoked later to free that structure. Code to delete an element p from the linked list from IRQ context might then be as follows:

    list_del_rcu(p);
    call_rcu(&p->rcu, p_callback);

Since call_rcu() never blocks, this code can safely be used from within IRQ context. The function p_callback() might be defined as follows:

    static void p_callback(struct rcu_head *rp)
    {
        struct pstruct *p = container_of(rp, struct pstruct, rcu);

        kfree(p);
    }

Unloading Modules That Use call_rcu()

But what if p_callback is defined in an unloadable module?

[Cartoon] If we unload the module while some RCU callbacks are pending, the CPUs executing these callbacks are going to be severely disappointed when they are later invoked, as fancifully depicted on the right.

We could try placing a synchronize_rcu() in the module-exit code path, but this is not sufficient. Although synchronize_rcu() does wait for a grace period to elapse, it does not wait for the callbacks to complete.

One might be tempted to try several back-to-back synchronize_rcu() calls, but this is still not guaranteed to work. If there is a very heavy RCU-callback load, then some of the callbacks might be deferred in order to allow other processing to proceed. Such deferral is required in realtime kernels in order to avoid excessive scheduling latencies.

rcu_barrier()

We instead need the rcu_barrier() primitive. This primitive is similar to synchronize_rcu(), but instead of waiting solely for a grace period to elapse, it also waits for all outstanding RCU callbacks to complete. Pseudo-code using rcu_barrier() is as follows:
  1. Prevent any new RCU callbacks from being posted.
  2. Execute rcu_barrier().
  3. Allow the module to be unloaded.
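In code, a module exit function following this pattern might look like the following sketch (the flag and function names are invented; rcu_barrier(), __exit, and module_exit() are the real interfaces):

    static int nomorecallbacks;     /* checked before every call_rcu() in this module */

    static void __exit my_module_exit(void)
    {
        nomorecallbacks = 1;    /* step 1: stop posting new RCU callbacks */
        rcu_barrier();          /* step 2: wait for all outstanding callbacks to finish */
    }                           /* step 3: the module text may now safely go away */
    module_exit(my_module_exit);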
Quick Quiz #1: Why is there no srcu_barrier()?

Quick Quiz #2: Why is there no rcu_barrier_bh()?

The rcutorture module makes use of rcu_barrier() in its exit function as follows:

  1 static void
  2 rcu_torture_cleanup(void)
  3 {
  4     int i;
  5 
  6     fullstop = 1;
  7     if (shuffler_task != NULL) {
  8         VERBOSE_PRINTK_STRING("Stopping rcu_torture_shuffle task");
  9         kthread_stop(shuffler_task);
 10     }
 11     shuffler_task = NULL;
 12 
 13     if (writer_task != NULL) {
 14         VERBOSE_PRINTK_STRING("Stopping rcu_torture_writer task");
 15         kthread_stop(writer_task);
 16     }
 17     writer_task = NULL;
 18 
 19     if (reader_tasks != NULL) {
 20         for (i = 0; i < nrealreaders; i++) {
 21             if (reader_tasks[i] != NULL) {
 22                 VERBOSE_PRINTK_STRING(
 23                     "Stopping rcu_torture_reader task");
 24                 kthread_stop(reader_tasks[i]);
 25             }
 26             reader_tasks[i] = NULL;
 27         }
 28         kfree(reader_tasks);
 29         reader_tasks = NULL;
 30     }
 31     rcu_torture_current = NULL;
 32 
 33     if (fakewriter_tasks != NULL) {
 34         for (i = 0; i < nfakewriters; i++) {
 35             if (fakewriter_tasks[i] != NULL) {
 36                 VERBOSE_PRINTK_STRING(
 37                     "Stopping rcu_torture_fakewriter task");
 38                 kthread_stop(fakewriter_tasks[i]);
 39             }
 40             fakewriter_tasks[i] = NULL;
 41         }
 42         kfree(fakewriter_tasks);
 43         fakewriter_tasks = NULL;
 44     }
 45 
 46     if (stats_task != NULL) {
 47         VERBOSE_PRINTK_STRING("Stopping rcu_torture_stats task");
 48         kthread_stop(stats_task);
 49     }
 50     stats_task = NULL;
 51 
 52     /* Wait for all RCU callbacks to fire. */
 53     rcu_barrier();
 54 
 55     rcu_torture_stats_print(); /* -After- the stats thread is stopped! */
 56 
 57     if (cur_ops->cleanup != NULL)
 58         cur_ops->cleanup();
 59     if (atomic_read(&n_rcu_torture_error))
 60         rcu_torture_print_module_parms("End of test: FAILURE");
 61     else
 62         rcu_torture_print_module_parms("End of test: SUCCESS");
 63 }

Line 6 sets a global variable that prevents any RCU callbacks from re-posting themselves. This will not be necessary in most cases, since RCU callbacks rarely include calls to call_rcu(). However, the rcutorture module is an exception to this rule, and therefore needs to set this global variable.

Lines 7-50 stop all the kernel tasks associated with the rcutorture module. Therefore, once execution reaches line 53, no more rcutorture RCU callbacks will be posted. The rcu_barrier() call on line 53 waits for any pre-existing callbacks to complete.

Lines 55-62 then print status and do operation-specific cleanup before returning, permitting the module-unload operation to complete.

Quick Quiz #3: Is there any other situation where rcu_barrier() might be required?

Your module might have additional complications. For example, if your module invokes call_rcu() from timers, you will need to first cancel all the timers, and only then invoke rcu_barrier() to wait for any remaining RCU callbacks to complete.
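A sketch of such an exit path, with invented names (del_timer_sync() and rcu_barrier() are the real primitives), might be:

    static void __exit my_timer_module_exit(void)
    {
        stop_posting = 1;            /* the timer handler checks this before call_rcu() */
        del_timer_sync(&my_timer);   /* first: make sure the timer cannot fire again... */
        rcu_barrier();               /* ...then: wait for the callbacks already posted */
    }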

Implementing rcu_barrier()

Dipankar Sarma's implementation of rcu_barrier() makes use of the fact that RCU callbacks are never reordered once queued on one of the per-CPU queues. His implementation queues an RCU callback on each of the per-CPU callback queues, and then waits until they have all started executing, at which point, all earlier RCU callbacks are guaranteed to have completed.

The code for rcu_barrier() is as follows:

  1 void rcu_barrier(void)
  2 {
  3     BUG_ON(in_interrupt());
  4     /* Take cpucontrol mutex to protect against CPU hotplug */
  5     mutex_lock(&rcu_barrier_mutex);
  6     init_completion(&rcu_barrier_completion);
  7     atomic_set(&rcu_barrier_cpu_count, 0);
  8     on_each_cpu(rcu_barrier_func, NULL, 0, 1);
  9     wait_for_completion(&rcu_barrier_completion);
 10     mutex_unlock(&rcu_barrier_mutex);
 11 }

Line 3 verifies that the caller is in process context, and lines 5 and 10 use rcu_barrier_mutex to ensure that only one rcu_barrier() is using the global completion and counters at a time, which are initialized on lines 6 and 7. Line 8 causes each CPU to invoke rcu_barrier_func(), which is shown below. Note that the final "1" in on_each_cpu()'s argument list ensures that all the calls to rcu_barrier_func() will have completed before on_each_cpu() returns. Line 9 then waits for the completion.

The rcu_barrier_func() runs on each CPU, where it invokes call_rcu() to post an RCU callback, as follows:

  1 static void rcu_barrier_func(void *notused)
  2 {
  3     int cpu = smp_processor_id();
  4     struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
  5     struct rcu_head *head;
  6 
  7     head = &rdp->barrier;
  8     atomic_inc(&rcu_barrier_cpu_count);
  9     call_rcu(head, rcu_barrier_callback);
 10 }

Lines 3 and 4 locate RCU's internal per-CPU rcu_data structure, which contains the struct rcu_head needed for the later call to call_rcu(). Line 7 picks up a pointer to this struct rcu_head, and line 8 increments a global counter. This counter will later be decremented by the callback. Line 9 then registers the rcu_barrier_callback() on the current CPU's queue.

The rcu_barrier_callback() function simply atomically decrements the rcu_barrier_cpu_count variable and finalizes the completion when it reaches zero, as follows:

  1 static void rcu_barrier_callback(struct rcu_head *notused)
  2 {
  3     if (atomic_dec_and_test(&rcu_barrier_cpu_count))
  4         complete(&rcu_barrier_completion);
  5 }

Quick Quiz #4: What happens if CPU 0's rcu_barrier_func() executes immediately (thus incrementing rcu_barrier_cpu_count to the value one), but the other CPUs' rcu_barrier_func() invocations are delayed for a full grace period? Couldn't this result in rcu_barrier() returning prematurely?

rcu_barrier() Summary

The rcu_barrier() primitive has seen relatively little use, since most code using RCU is in the core kernel rather than in modules. However, if you are using RCU from an unloadable module, you need to use rcu_barrier() so that your module may be safely unloaded.

Answers to Quick Quizzes

Quick Quiz #1: Why is there no srcu_barrier()?

Since there is no call_srcu(), there can be no outstanding SRCU callbacks. Therefore, there is no need to wait for them.

Quick Quiz #2: Why is there no rcu_barrier_bh()?

Because no one has needed it yet. As soon as someone needs to use call_rcu_bh() from within an unloadable module, they will need an rcu_barrier_bh().

Quick Quiz #3: Is there any other situation where rcu_barrier() might be required?

Interestingly enough, rcu_barrier() was not originally implemented for module unloading. Nikita Danilov was using RCU in a filesystem, which resulted in a similar situation at filesystem-unmount time. Dipankar Sarma coded up rcu_barrier() in response, so that Nikita could invoke it during the filesystem-unmount process.

Much later, yours truly hit the RCU module-unload problem when implementing rcutorture, and found that rcu_barrier() solves this problem as well.

Quick Quiz #4: What happens if CPU 0's rcu_barrier_func() executes immediately (thus incrementing rcu_barrier_cpu_count to the value one), but the other CPUs' rcu_barrier_func() invocations are delayed for a full grace period? Couldn't this result in rcu_barrier() returning prematurely?

This cannot happen. The reason is that on_each_cpu() has its last argument, the wait flag, set to "1". This flag is passed through to smp_call_function() and further to smp_call_function_on_cpu(), causing the latter to spin until the cross-CPU invocation of rcu_barrier_func() has completed. This by itself would prevent a grace period from completing on non-CONFIG_PREEMPT kernels, since each CPU must undergo a context switch (or other quiescent state) before the grace period can complete. However, this is of no use in CONFIG_PREEMPT kernels.

Therefore, on_each_cpu() disables preemption across its call to smp_call_function() and also across the local call to rcu_barrier_func(). This prevents the local CPU from context switching, again preventing grace periods from completing. This means that all CPUs have executed rcu_barrier_func() before the first rcu_barrier_callback() can possibly execute, in turn preventing rcu_barrier_cpu_count from prematurely reaching zero.

Currently, -rt implementations of RCU keep but a single global queue for RCU callbacks, and thus do not suffer from this problem. However, when the -rt RCU eventually does have per-CPU callback queues, things will have to change. One simple change is to add an rcu_read_lock() before line 8 of rcu_barrier() and an rcu_read_unlock() after line 8 of this same function. If you can think of a better change, please let me know!

Comments (5 posted)

Patches and updates

Kernel trees

Core kernel code

Development tools

Device drivers

Filesystems and block I/O

Janitorial

Memory management

Networking

Virtualization and containers

Miscellaneous

Page editor: Forrest Cook

Distributions

News and Editorials

LCA: How to improve Debian security

[Russell Coker] Russell Coker is a long-time figure in the Linux security world, having done much of the heavy lifting involved in making SELinux work with both the Debian and Fedora distributions. At the Debian miniconf at linux.conf.au, Russell ran a session on what Debian should do to improve its security. With a relatively small number of changes, Debian could be made significantly harder to break into.

The first suggested change is not Debian-specific in any way: Russell makes the claim that Linux needs to support more capabilities. The Linux capability system attempts to break down the "can do anything" superuser privileges into less powerful capabilities, with the idea that programs can be restricted to the privileges they actually need to get their jobs done. Unfortunately, this splitting of privileges is incomplete, in that two of them are still too powerful. They are:

  • CAP_NET_ADMIN controls the management of IP tunnels, type of service settings, routes, interface parameters, raw packet access, and much more. There are many unrelated powers which are granted by CAP_NET_ADMIN; splitting them up would make the system more secure in dealing with potentially buggy network processes.

  • CAP_SYS_ADMIN is even worse, being the grab-bag capability used whenever somebody can't find something more specific. This capability controls access to disk quotas, the mounting of filesystems, NVRAM access, serial port parameters, memory management policies, and dozens of other actions. Getting CAP_SYS_ADMIN is not far removed from simply having superuser powers.

Russell talked about the benefits of splitting up these capabilities, but didn't get much into the practical difficulties. Those include the fact that the 32-bit capability mask is just about full already, the need to educate developers and administrators about the new capabilities, and the task of changing the current capability tests and dealing with the things that break. It's an obviously good idea, but carrying it through will require some work.
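As an illustration of just how much power is left even after restricting a process, here is a minimal sketch (not from the talk) using the libcap interface: a root-owned helper drops everything except CAP_NET_ADMIN, yet the surviving capability still allows rerouting traffic, changing interface parameters, raw packet access, and more.

    /* Minimal sketch; build with -lcap.  Error handling is abbreviated. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/capability.h>

    int main(void)
    {
        /* Reduce the permitted and effective sets to CAP_NET_ADMIN only. */
        cap_t caps = cap_from_text("cap_net_admin=ep");

        if (caps == NULL || cap_set_proc(caps) == -1) {
            perror("cap_set_proc");
            exit(EXIT_FAILURE);
        }
        cap_free(caps);

        /* ... network-management work happens here, with one capability
         * that is still far broader than the job requires ... */
        return 0;
    }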

Next on Russell's list is polyinstantiated directories. In words of fewer syllables, this means directories where each user gets his or her own, private copy. When applied to shared directories like /tmp, polyinstantiated directories can help defend the system against symbolic link and temporary file attacks. The necessary support is already there - the kernel has filesystem namespaces, shared subtrees, and the PAM modules to control these features. It's just a matter of hooking it all together in a way that works.

The ExecShield patch set is the next suggestion. In particular, Russell would like to see protection against execution of code on the stack and in writable memory mappings. As he pointed out, Fedora and Red Hat Enterprise Linux have shipped this feature for some time with little in the way of ill effects. It's mostly a matter of getting some of the remaining patches into the kernel mainline - or maintaining them separately in the Debian kernel.

The TIOCSTI ioctl() command allows a process to stuff characters into a terminal device, from which they will later be read. If a hostile user can get an administrator to switch over to his account (with su, say), he can use this ioctl() to take over the administrator's shell. Ways of avoiding this attack include not using su in a number of situations - for example, by using ssh to log in as another user. The setsid() system call can also be used to create a barrier to defend against character-stuffing attacks.
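A minimal sketch of the setsid() barrier (not Russell's code; privilege dropping and error handling are omitted): the wrapper forks, and the child calls setsid() to start a new session with no controlling terminal, after which the kernel refuses TIOCSTI on the inherited terminal descriptors for unprivileged processes. A real su-like tool would also allocate a fresh pseudo-terminal so that the resulting shell keeps job control.

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void run_shell_in_new_session(const char *shell)
    {
        pid_t pid = fork();

        if (pid == 0) {
            setsid();               /* new session, no controlling terminal */
            execl(shell, shell, (char *)NULL);
            _exit(127);             /* exec failed */
        }
        waitpid(pid, NULL, 0);
    }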

Next is better support for Xen, especially at install time. Russell would like to be able to install a Debian server system where the only thing found in the host domain is an SSH server and the tools needed to get the guest domain running. All of the real server tasks would run in the guest. Then, if that guest is compromised, the core server's integrity remains, and it can be used to examine the guest closely. Among other things, rootkits running in the guest will have a much harder time hiding from an administrator running on the host.

Finally, Russell suggested that the Debian release following etch should install and run SELinux by default - just like Fedora does. Just running SELinux improves security, but things get better when the developers use it as well. SELinux can block attacks, but, when used by developers, it can reveal security-related bugs before anybody gets a chance to exploit them. In essence, SELinux is a language which is used to describe the expected behavior of an application; when the application deviates from the expectations, SELinux sounds the alarm and allows the situation to be investigated.

Comments (14 posted)

New Releases

BLAG 60000 (flout) Released

The Fedora Core 6-based BLAG 60000 is available from BLAG Linux and GNU. "BLAG 60000 (flout) is a new series with a new base (FC6) and many new applications. Featuring all of the applications below on JUST ONE CD. Burn copies and hand them out! It's got it all. Did I mention it's all on just one CD?"

Full Story (comments: none)

FreeBSD 6.2 released

The FreeBSD Release Engineering Team has announced the availability of FreeBSD 6.2-RELEASE. "This release continues the development of the 6-STABLE branch providing performance and stability improvements, many bug fixes and new features."

Comments (none posted)

FreeSBIE-2.0 released

The FreeSBIE team has announced the release of FreeSBIE 2.0, a live CD based on FreeBSD. "Development cycle started on August 2006 and, after many months and a series of four ISO images, an official stable FreeSBIE image is available. It went under many changes, many experiments, many bugfixes, many features' additions, but it was worth the work and the time we spent on it. We must express our thanks to everyone involved in the release process. FreeSBIE 2.0-RELEASE (codename Clint Eastwood) is based on the fresh FreeBSD 6.2-RELEASE, both in terms of sources and of packages. It contains more than 450 pieces and 1,3 gigabytes of software, all in a single CD-ROM of 668 megabytes."

Comments (none posted)

IPCop Firewall 1.4.13 released (SourceForge)

Version 1.4.13 of IPCop Firewall has been announced. "IPCop is a friendly firewall solution running on linux to protect networks. It will be geared towards home and SOHO users. Interface is task based. Hardware requirements could be very minimal and grow with services used. IPCop v1.4.13 is released unchanged from 1.4.13rc1. This release update a few tools due to security issues, fix bugs and update some drivers. As usual, this version can be installed as an update from previous v1.4.x versions or with a ready-to-go ISO or usb bootable images for a fresh install."

Comments (none posted)

Ubuntu Herd 2 released

Ubuntu has released the second Feisty Fawn Herd CD on the road to Ubuntu 7.04. "The primary focus during the time from Herd 1 have been the re-merging of changes from Debian and inclusion of new versions of applications. Notably, we have upgraded the kernel to 2.6.20." The Herd 2 CD is available for Ubuntu, Kubuntu, Edubuntu and Xubuntu.

Full Story (comments: none)

Distribution News

Mandriva at the Solutions Linux 2007 summit.

Mandriva will be participating in the Solutions Linux summit, Jan.30th to Feb. 1st 2007. "Mandriva will take advantage of this event to share with the guests its vision of Linux and its passion for the open source sector. Besides the Mandriva philosophy, marrying both advance technology and respect for the open source community, you will be able to discover all the products developed by the company."

Full Story (comments: none)

BLAG 60000: Shipped to you for free

BLAG and The Linux Store have an arrangement where they will ship you the BLAG 60000 CD for free. Click below for information on how to order.

Full Story (comments: none)

Distribution Newsletters

Fedora Weekly News Issue 73

This week's Fedora Weekly News covers the New Fedora Infrastructure Leader, GPG Keysigning at FUDcon, Preparation continues for SCALE 5X, Fedora Core 6 LiveCD Review, Red Hat's Fedora to Get Longer Support, and several other topics.

Comments (none posted)

Gentoo Weekly Newsletter

The Gentoo Weekly Newsletter for January 8, 2007 looks at new Bugzilla servers, Gentoo on the HP iPAQ hx4700, SCALE to host Women in Open Source mini conference, interview with Derek Wise of GNi, and much more.

Comments (none posted)

Gentoo Weekly Newsletter

The Gentoo Weekly Newsletter for January 15, 2007 is also available. Topics include Maintainer needed for gentoo-sources-2.4, Simplified Chinese translation team seeking help, Gentoo classes at MIT, and more.

Comments (none posted)

DistroWatch Weekly, Issue 185

The DistroWatch Weekly for January 15, 2007 is out. "A somewhat slow week was concluded with a long-awaited new release of FreeBSD 6.2; we'll take a quick look at the new version and add a few more interesting bits and pieces from the BSD world. Besides covering the most popular BSD operating system, we also continue reviewing some of the promising new releases of 2006; this week it's the turn of Pardus Linux - an independently developed distribution with a superb package management infrastructure. In the news section, gNewSense starts work on a new release, a developer announces a Debian-based live CD for the Sony PlayStation 3, and Sun Microsystems offers a free DVD with Solaris 10 to all who are interested in checking out the venerable UNIX operating system."

Comments (none posted)

Package updates

Fedora updates

Updates for Fedora Core 6: xterm (update to 223), autofs (bug fixes), glibc (bug fix), gcc (update from gcc-4_1-branch), cpuspeed (numerous bug fixes), postgresql (update to PostgreSQL 8.1.6), shadow-utils (bug fix), gimp-print (bug fix), lm_sensors (update lm_sensors to 2.10.1), linuxdoc-tools (bug fixes), util-linux (bug fix), m4 (bug fix), selinux-policy (bug fixes), cpuspeed (bug fixes), jpackage-utils (bug fixes), tar (bug fixes), gawk (bug fix), evolution-data-server (bug fix), gawk (bug fixes), udev (merge RHEL bugfixes), gnucash (update to 2.0.4), squid (update to the latest upstream), shadow-utils (bug fix), gettext (bug fix), python-numeric (update to 24.2), sysklogd (fix IPv6 patch), libselinux (bug fix), yum (update to 3.0.3), yum-metadata-parser (update to 1.0.3), udev (merge RHEL bugfixes), avahi (bug fix), nspr (upstream patch to fix ipv6 support), xen (bug fixes), system-config-printer (bug fix update), autofs (bug fix), foomatic (database update), strace (bug fixes), libselinux (man page fix).

Updates for Fedora Core 5: postgresql (update to PostgreSQL 8.1.6), gawk (bug fixes), logwatch (fix several logwatch services), xen (bug fixes), nspr (upstream patch to fix ipv6 support), strace (bug fixes).

Comments (none posted)

Mandriva updates

Updates for Mandriva Linux 2007.0: nmap (bug fixes), desktop-common-data (add a menu item), lirc (fix for SMP-enabled kernels), bluez-utils (bug fix), perl-SOAP-Lite (bug fix), wvstreams (built with openssl 0.9.8), tripwire (bug fix).

Comments (none posted)

rPath updates

Updates for rPath Linux 1: conary, conary-build, conary-repository (Conary 1.1.15 maintenance release), spamassassin, perl-IO-Socket-SSL, perl-IO-Zlib, perl-Archive-Tar, perl-IP-Country, perl-Net-CIDR-Lite, perl-Net-Ident, perl-Sys-Hostname-Long, perl-Mail-SPF-Query, perl-Algorithm-Diff, perl-Text-Diff (add spamassassin dependencies).

Comments (none posted)

Ubuntu updates

Updates for Ubuntu 6.10: gnome-system-tools (bug fixes), gnome-vfs2 (bug fixes), gnome-vfs2 (another bug fix), pouetchess (bug fixes), mousepad (bug fix), vino (upload to edgy-updates), gtetrinet (bug fixes), tzdata (upload of the -proposed version to -updates).

Updates for Ubuntu 6.06 LTS: langpack-locales (bug fixes).

Comments (none posted)

Distribution reviews

DeLi Linux: A light Linux distribution, done right (Linux.com)

Linux.com reviews DeLi Linux. "Perhaps one of the best Linux distributions tailored for older hardware is DeLi Linux. It's simple, and performs well enough to run on hardware as old as a 486. In fact, DeLi Linux runs on anything better than a 386 with at least 4MB of memory, though if you have only 4MB, don't expect stellar performance. Things get decent at 8MB, 16MB is smooth, and 32MB or more is perfect. I tested DeLi Linux on several machines, ranging from a 66MHz 486 DX2 with 8MB of RAM up to a a Dell Pentium III system with 256MB of RAM. The 486 system struggled to open anything, taking several minutes if things got too complex, such as when I was running a window manager, the X server, and AbiWord. However, DeLi Linux surprised me by turning the old 486 into an usable system, provided I had patience to spare. What's more, the Pentium III was extremely responsive, being even faster than my main AMD64 system running Fedora Core 6."

Comments (none posted)

Fedora releases a live CD (Linux.com)

Mayank Sharma reviews the first Fedora live CD on Linux.com. "The Fedora community got its first official live CD last month. Based on Fedora Core 6, it shows off the best of what Fedora has to offer. Furthermore, the tools used to put together the CD make creating and maintaining custom Red Hat or Fedora-based live CDs simple. The live CD comes as a 684MB ISO that supports only the i386 architecture. The compressed filesystem holds about 2.3GB of applications -- a fraction of applications and utilities in the five-CD set that makes up Fedora Core 6. It runs Linux kernel 2.6.18 and the latest stable GNOME (2.16) and X.org (7.1). There's no cosmetic difference between the live CD and FC6 apart from wallpaper that reflects its time of release."

Comments (2 posted)

Ubuntu 6.10, OpenSUSE 10.2 Rise to (and in Some Ways Above) Microsoft's Vista Challenge (eWeek)

eWeek reviews Ubuntu 6.10 and OpenSUSE 10.2. "Ubuntu 6.10, also known as Edgy Eft, is the latest release in the popular line of Linux operating systems from Canonical. Ubuntu is a fairly young distribution, but its roots in Debian give it a solid foundation—both in terms of its code and in its community of users. This strong foundation is most evident in Ubuntu's excellent software management tools and wide catalog of prepackaged software. Ubuntu's catalog surpasses those of all other Linux distributions we've tested, and its software management tools outclass not only Linux rivals' but also Microsoft Windows' and Apple OS X's."

Comments (23 posted)

Page editor: Rebecca Sobol

Development

Twisted reaches the 2.5.0 milestone

Twisted is an event-driven networking framework written in the Python language that is being developed by Twisted Matrix Labs. Twisted has been released under the MIT license.

Like any engine, Twisted has many "moving parts." Twisted Projects is our name for these components of Twisted. Taken together, they form the whole of Twisted. Twisted projects variously support TCP, UDP, SSL/TLS, multicast, Unix sockets, a large number of protocols (including HTTP, NNTP, IMAP, SSH, IRC, FTP, and others), and much more.
[Twisted Matrix Labs]

See the list of Twisted Projects to get an idea of what Twisted has been used for. The Twisted FAQ explains some of the advantages of using Twisted; these include good security, stable code, rapid development time and more. The project documentation helps new users get started with tutorials, an API reference, howtos, examples and a developer guide.

Version 2.5 of Twisted was recently announced. "Twisted 2.5.0 is a major feature release, with several interesting new developments and a great number of bug fixes."

New features in version 2.5 include:

  • The Asynchronous Messaging Protocol, a simple request/response system for persistent connections.
  • An Epoll-based reactor for improving performance in high network traffic situations.
  • The ability to process sub-commands from the command line.
  • Support for version 2.5 of the Python language.
  • Support for inlineCallbacks has been added; this takes advantage of the Python 2.5 yield expression.
  • Improvements to the Jabber capabilities in the twisted.words chat project.
For more information on the changes and bug fixes, see the version 2.5 release notes.

If you would like to give Twisted 2.5 a spin, the code is available for download here.

Comments (none posted)

System Applications

Clusters and Grids

Release 2.0.8 of Linux-HA is available

Version 2.0.8 of Linux-HA, aka heartbeat and OpenHA, a cluster management system, is out. "There are many significant features and numerous mostly-minor bug fixes in this release." New features include support for split-site geographic configurations, improvements to the CRM placement algorithms, support for IBM xSeries STONITH devices and several new resource agents.

Full Story (comments: none)

Database Software

PostgreSQL Weekly News

The January 14, 2007 edition of the PostgreSQL Weekly News is online with the latest PostgreSQL DBMS articles and resources.

Full Story (comments: none)

SQLite 3.3.10 released

Version 3.3.10 of SQLite, a lightweight DBMS, is out. "Version 3.3.10 fixes several bugs that were introduced by the previous release. Upgrading is recommended."

Comments (none posted)

wxSQLite3 1.7.0 released (SourceForge)

Version 1.7.0 of wxSQLite3 has been announced. "The new version 1.7.0 of wxSQLite3 - a thin wrapper for the SQLite database for wxWidgets applications - now supports the current version 3.3.10 of SQLite. Support for BLOBs as wxMemoryBuffer objects and for loadable extensions has been added. Optional support for key based database encryption is also included."

Comments (none posted)

Libraries

Release of libnfnetlink, libnfnetlink_conntrack and libnetfilter_queue

The netfilter project has announced the release of libnfnetlink 0.0.25, libnfnetlink_conntrack 0.0.50 and libnetfilter_queue 0.0.13. "libnfnetlink is the low-level userspace library for nfnetlink based communication between the kernel-side netfilter and the userspace world. libnfnetlink_conntrack is the library for userspace access to the in-kernel connection tracking table. libnetfilter_queue is the library to filter and mangle packets from userspace".

Full Story (comments: none)

Mail Software

bogofilter 1.1.5 released

Version 1.1.5 of bogofilter, a spam filter, is out. "This release fixes a problem in the block-on-subnets option and fixes a Makefile problem for MAC-OSX."

Full Story (comments: none)

Networking Tools

conntrackd 0.9.2 released

Version 0.9.2 of conntrackd is out, changes include support for a new NACK based protocol and removal of a dependency on the unofficial libraries. "Conntrackd is the userspace daemon for the Connection Tracking System. This daemon maintains a copy of the Connection Tracking System in userspace. It is entirely written in C and is highly configurable and easily extensible. Currently it covers the specific aspects of Stateful Linux firewalls to enable High Availability (HA) solutions and can be used as statistics collector of the firewall use."

Full Story (comments: none)

Virtualization Software

VirtualBox OSE released under the GPL

InnoTek has announced the release of VirtualBox Open Source Edition under the GNU General Public License. "VirtualBox OSE is the first professional PC virtualization solution released as open source under the GNU General Public License (GPL). With VirtualBox, customers get the most versatile virtualization product on the market, both for enterprise and individual use. VirtualBox' open source license allows everyone to contribute to the development of the product and customize it to suit individual needs." (Thanks to Daniel de Kok.)

Comments (2 posted)

Web Site Development

Apache HTTP server 2.2.4 released

Version 2.2.4 of the Apache web server has been released. "The Apache Software Foundation and the Apache HTTP Server Project are pleased to announce the release of version 2.2.4 of the Apache HTTP Server ("Apache"). This version of Apache is principally a bugfix release. We consider this release to be the best version of Apache available, and encourage users of all prior versions to upgrade."

Full Story (comments: none)

Plume CMS 1.2.2 Released (SourceForge)

Version 1.2.2 of Plume CMS has been released; it features bug fixes. "Plume CMS is a fully functional Content Management System in PHP on top of MySQL. Including articles, news, file management and all of the general functionalities of a CMS. It is completely accessible and very easy to use on a daily basis."

Comments (none posted)

Desktop Applications

BitTorrent Applications

Azureus 2.5.0.2 released (SourceForge)

Version 2.5.0.2 of Azureus has been released. "Azureus is a powerful, full-featured, cross-platform Java BitTorrent client. This release contains new features, improvements and fixes, such as reduced memory footprint and faster startup times."

Comments (none posted)

Desktop Environments

GNOME 2.17.5 released

Version 2.17.5 of the GNOME desktop environment has been announced. "Oh, I believe it's important to mention this is also the release which marks the start of the API/ABI freeze for the platform and the start of the feature freeze. If you break the freezes, we'll send some crazy people make you understand what a freeze is. Crazy people as in French people. Or even crazier ones like persons who're going to Australia for GNOME.conf.au (starting next Monday)."

Full Story (comments: none)

GARNOME 2.17.5 released

Version 2.17.5 of GARNOME, the bleeding-edge GNOME distribution, is out. "We are pleased to announce the release of GARNOME 2.17.5 Desktop and Developer Platform. This release includes all of GNOME 2.17.5 plus a whole bunch of updates that were released after the GNOME freeze date. This is the fifth release in the unstable cycle, with more features, more fixes and yet more madness added. It is for anyone who wants to get his hands dirty on the development branch, or who'd like to get a peek at future features."

Full Story (comments: none)

GNOME Software Announcements

The following new GNOME software has been announced this week: You can find more new GNOME software releases at gnomefiles.org.

Comments (none posted)

KDE Software Announcements

The following new KDE software has been announced this week: You can find more new KDE software releases at kde-apps.org.

Comments (none posted)

KDE Commit-Digest (KDE.News)

The January 14, 2007 edition of the KDE Commit-Digest has been announced. The content summary says: "In this week's KDE Commit-Digest: NEPOMUK integration, and a new "browser" interface added to Akonadi. Refactoring work in Kate and KPilot. Experiments with a new Kate-alike session list in Konsole. Expose-like window management effects in KWin. Support for styling the background of forms in KHTML. A Strigi-based metadata indexer for KIO. Signature support in Mailody. Improved support for metadata internal storage and display, and a new Flake shape for video in KOffice. Large code update in Umbrello, part of the Student Mentoring program. New tileset selector for kdegames, to be shared between KMahjongg and KShisen. Support for the FictionBook format in okular. Import of an initial version of the Oxygen sound theme for KDE 4. Import of user documentation for Kompare. kaction-cleanup-branch merged back into the main kdelibs/. Security fixes in KPDF and KSirc."

Comments (none posted)

Xfce 4.2.4 released

Version 4.2.4 of Xfce, a lightweight desktop environment, is out. "A new bug fix release of Xfce 4.2 is available. This release is supposed to be the last release for the 4.2 branch. It includes several fixes ported from the current developpment branch. This release should not be confused with the upcoming Xfce 4.4 release, it's a bug fix release of the previous stable branch."

Comments (none posted)

Xorg Software Announcements

The following new Xorg software has been announced this week: More information can be found on the X.Org Foundation wiki.

Comments (none posted)

Electronics

eispice 0.10 released

Version 0.10 of eispice, a clone of the Berkeley Spice 3 Simulation Engine optimized for High Speed Digital Design with a Python-based front-end, has been announced. "The performance of the PyB model was greatly improved for this release. PyB error messages have been improved, along with a handful of other minor bug-fixes. This release also coincides with the release of a new eispice IDE (eide) which includes a PyQt based test editor, a Python Interpreter, and eispice rolled into a single application using pyinstaller, it is primarily intended for Windows Users."

Comments (none posted)

Fonts and Images

Linux Libertine 2.3.2 released

Version 2.3.2 of the Linux Libertine font set has been announced. "We just released LinuxLibertine in version 2.3.2. Please test and give us feedback."

Full Story (comments: none)

Games

Varconf 0.6.5 released

Version 0.6.5 of Varconf has been announced on the WorldForge game site. "Varconf is configuration handling library required by many WorldForge components. It supports the loading and saving of config files, handling of complex command line arguments, and signals to notify the application of configuration changes. Major changes in this version: Special characters can now be escaped in config strings. The library now uses the current sigc++ native API, so is more efficient."

Comments (none posted)

Graphics

Release of Crystal Space 1.0 (SourceForge)

Version 1.0 of Crystal Space has been announced. "After nearly 10 years of development we are very proud to release version 1.0 of Crystal Space. Crystal Space is an Open Source and portable 3D engine framework which runs on GNU/Linux, Windows, and MacOS/X. It is fully featured with support for vertex and fragment shaders, dynamic lighting and lightmaps, skeletal animation, physics, 3D sound, terrain engine, python support and much more. Together with Crystal Space 1.0 we also release Crystal Entity Layer 1.0. This is a game layer on top of Crystal Space which makes it easier to develop games."

Comments (none posted)

Multimedia

swfdec 0.4.1 released

Version 0.4.1 of swfdec, a decoder/renderer for Macromedia Flash animations, is out. "Apart from the usual loads of bugfixing the big thing in this release is initial video support. Unfortunately this does not mean that all the video sites will play yet, but it's one step in that direction."

Full Story (comments: none)

Video Applications

Theorur 0.5.2 released

Version 0.5.2 of Theorur has been announced. "Theorur is a GUI for streaming Ogg/Theora to an icecast server. It supports A/V input from v4l or IEEE 1394 devices. Theorur needs dvgrab, ffmpeg2theora, and oggfwd_im."

Comments (none posted)

Word Processors

AbiWord 2.5.0 released (GnomeDesktop)

GnomeDesktop.org has an announcement for version 2.5.0 of the AbiWord word processor. "The AbiWord team is very proud to announce version AbiWord v2.5.0 of the popular cross platform word processor. This is the first snapshot of the development that will lead to AbiWord 2.6. This snapshot allows interested developers, testers and users a sneak preview into the future of AbiWord."

Comments (none posted)

Languages and Tools

Caml

Caml Weekly News

The January 16, 2007 edition of the Caml Weekly News is out with new Caml language articles.

Full Story (comments: none)

Perl

Weekly Perl 6 mailing list summary (O'Reilly)

The January 7-13, 2007 edition of the Weekly Perl 6 mailing list summary is out with coverage of the latest Perl 6 developments.

Comments (none posted)

Python

python-dev Summary

The python-dev Summary is out with coverage of the python-dev mailing list for the period of December 1-15, 2006.

Full Story (comments: none)

A Quickstart to building GUI based applications in Python (Builder AU)

Nick Gibson shows how to do GUI programming with Python and Tk. "When you're learning a new language, particularly a scripting language such as Python, you might be forced to stick to console based programs for some time before you've picked up enough to start writing graphical based programs. It's now been more than 25 years since the first commercial graphical user interface was released (for the curious, the Xerox STAR) and it seems a little archaic to still be using the console for applications. Thankfully Python's emphasis on simplicity means that you can include a graphical user interface in your programs without needing to be a Python guru. To prove this, I'll run through the creation of a simple note taking program, using the standard GUI toolkit for Python: Tk."

Comments (none posted)

Tcl/Tk

Tcl-URL!

The January 17, 2007 edition of the Tcl-URL! is online with new Tcl/Tk articles and resources.

Full Story (comments: none)

XML

Is XML 2.0 Under Development? (O'Reilly)

Micah Dubinko investigates the status of XML 2.0 on O'Reilly's XML.com. "In Micah Dubinko's return to the XML Annoyances banner, he speculates as to whether the W3C is already considering whether to start work on XML 2.0. Read this piece and decide for yourself."

Comments (none posted)

IDEs

Anjuta DevStudio 2.1.0 (beta) released (GnomeDesktop)

GnomeDesktop covers the release of Anjuta DevStudio 2.1.0 beta, an IDE for C and C++. " The long anticipated Anjuta DevStudio 2.1.0, The Wind, is now at your service. This proud announcement from Anjuta development team marks the beginning of beta stage for Anjuta 2.x series. This is the moment we all have so eagerly been waiting for and so wishfully been complaining about. Following the footsteps of last alpha release 2.0.2 'The Breeze', this release brings our promise of the much dreamed newgen GNOME IDE very close. 'The Wind' marks our way towards a truly powerful and simple-yet-subtle IDE, so far tentatively name 'The Storm'."

Comments (none posted)

Libraries

libgpod 0.4.2 released (SourceForge)

Version 0.4.2 of libgpod has been announced. "libgpod is a library that allows you to fill your iPod with content. Supported are all iPod models. On the Audio/Video content side, this release is a service update implementing support for new features like gapless replay or skip count. On the Photo content side, this release gives the first functional interface for the photo library. A test program demonstrates the ease-of use."

Comments (none posted)

Miscellaneous

BNF for Java: 'Visual' release (SourceForge)

A new release of BNF for Java is available. "BNF for Java is a BNF Compiler-Compiler, or Parser-Generator. It implements ISO Standard Backus-Naur Format, using Java. BNF allows you to create a syntax, or a complete language, to parse your data source. Your custom Java extensions generate output. This first release for 2007 adds a major improvement in the "Visual" user-interface for the BNF programmer. You can see your BNF and Java files, you can see the results from the compilers, you can track bugs with text reports. You can compile your compiler and run your parser/generator project from the GUI."

Comments (none posted)

Page editor: Forrest Cook

Linux in the news

Recommended Reading

Tivo Healthcare (Free Software Magazine)

Fred Trotter discusses the Tivoization of health care data in a Free Software Magazine article. "Well, consider what would happen if my software and the operating system underneath it were Tivoized. I help write an GPL Electronic Health Record (EHR). Under the current version of the GPL, someone could make an appliance from my software and GNU/Linux and prevent people from modifying and controlling healthcare data stored in the Electronic Health Record that ran on this device. What the Tivoization traps is the data, which for Tivo means movies and television shows recorded digitally. But what happens when the data that is trapped is infinitely more valuable? When we discuss DRM, we should be thinking of an EHR that has been Tivoized, (perhaps a health-Tivo) rather than a television recording device."

Comments (19 posted)

Vista launch will boost desktop Linux (ZDNet Australia)

ZDNet Australia suggests that Microsoft's launch of Windows Vista will give companies a new reason to switch to Linux. "The launch of Windows Vista has created a huge opportunity for Linux vendors to take a larger share of the corporate desktop market, according to the president of Linux Australia. New features combined with a slightly different look and feel mean that migrating to Vista from an older version of Windows will cause disruption in the workplace. On the first day of Linux.conf.au, the president of Linux Australia, Jonathon Oxer, told ZDNet Australia that instead of retraining staff on the new version of Windows, administrators could make the switch to Linux."

Comments (15 posted)

Trade Shows and Conferences

CES 2007 coverage (PC Magazine)

PC Magazine covers day 1 and day 2 of the Consumer Electronics Show (CES). "On Day Two of CES 2007, the news, announcements, and analysis kept rolling in, and our editors and analysts were on top of it all. Find out about the Blu-ray Consortium's plans for world domination, how you'll be getting TV on your PC and vice versa, a camera sensor that sees in the dark, and lots more, in this selective sample of today's stories."

Comments (none posted)

The silent victory of Linux-as-geology at CES 2007 (Linux Journal)

Linux Journal's Doc Searls covers the Linux side of the 2007 Consumer Electronics Show. "Three years ago, out of more than 2300 CES exhibitors, the word "Linux" appeared in text associated with just 11 of them, in the show's online guide. This year at CES 2007 has more than 2700 exhibitors; yet "Linux" appears in text associated with just 3 companies: Interact-TV, Neuros Technology and Pixel Magic Systems. Yet it is clearer than ever that Linux has become the bedrock on which more and more companies build their solutions."

Comments (none posted)

Fun and sun down under: Day one at Linux.conf.au (Linux.com)

Joe 'Zonker' Brockmeier joins LWN editor Jon Corbet and many others for fun and sun down under. "It took more than 17 hours in planes and a trip through customs, but I've made the trek from Denver, Colorado, to Sydney, Australia, for Linux.conf.au (LCA) 2007. Already it looks like the trip was worthwhile. Linux.conf.au (or "Linux.con f.au," as it says on our misprinted hats) is a roving conference held annually in different locations around Australia. This year, the conference returned to Sydney at the Kensington campus of the University of New South Wales, where the first Linux.conf.au was held." The LWN weekly edition for January 18th will feature more articles fresh from linux.conf.au.

Comments (none posted)

Companies

Sun's Fortran replacement goes open source (ZDNet)

ZDNet reports that Sun Microsystems is releasing Fortress, a replacement for the FORTRAN language with parallel programming capabilities, under the BSD license. "Sun Microsystems took a new open-source step this week, enlisting the outside world's help in an attempt to create a brand-new programming language called Fortress. On Tuesday, the company quietly released as open-source software a prototype Fortress "interpreter," a programming tool to execute Fortress programs line by line. "We're trying to engage academics and other third parties," said Eric Allen, a Sun Labs computer scientist and Fortress project leader, about the open-source move."

Comments (7 posted)

Sun to release OpenSolaris under GPL version 3 (Linux-Watch)

Linux-Watch reports that Sun is going to add the GNU General Public License version 3 to OpenSolaris in addition to its current CDDL. "This will enable programmers to share code among OpenSolaris and other GPLv3 open-source software projects. While it still looks very doubtful that Linux will go GPLv3, we can be certain that the Free Software Foundation Gnu Project's 5,000 plus programs will be available under the GPLv3. In addition, the Samba Team has announced that it will be making its popular Samba CIFS (Common Internet File System) software GPLv3."

Comments (40 posted)

Linux Adoption

EU Commission Study Finds You'll Save Money Switching to FOSS (Groklaw)

Groklaw looks into a recent study [PDF] by the European Union entitled "Study on the Economic impact of open source software on innovation and the competitiveness of the Information and Communication Technologies (ICT) sector in the EU". "I thought you'd be interested in the conclusion regarding total cost of ownership. Is it true that switching to Open Source will cost you more than staying with Windows, as Microsoft's "Get the Facts" page claims? No. The study found: "Our findings show that, in almost all the cases, a transition toward open source reports of savings on the long term – costs of ownership of the software products." But what about training costs? Doesn't that remove the benefits? No, the report found: "Costs to migrate to an open solution are relevant and an organization needs to consider an extra effort for this. However these costs are temporary and mainly are budgeted in less than one year." So there you are."

Comments (1 posted)

Large academic international interdisciplinary study on FLOSS gets the real facts (LXer)

Hans Kwint looks at a final draft of a study on the economic / innovative impacts of Free and Open Source Software. "The European Commission's enterprise and industry department just released the final draft of what could be the biggest academic interdisciplinary study on the economic / innovative impacts of FLOSS*. The study was done by an international consortium, led by the United Nations University / University of Maastricht's (NL,EU) department of innovation; UNU-MERIT for short. The study was prepared by senior researcher Rishab Aiyer Ghosh, who did a tremendous amount of FLOSS studies the last few years, amongst them on FLOSSpols and FLOSSWorld."

Comments (none posted)

Legal

BSD - The Dark Horse of Open Source, by Brendan Scott, OS Law (Groklaw)

Groklaw presents a paper by Brendan Scott on the BSD license. "Brendan Scott has been studying the BSD license, particularly in the context of Australian law, and he has come up with some startling questions. Is the BSD license as permissive as we've thought? The paper is principally for lawyers to consider, but it's certainly of interest to everyone, and note his disclaimer: Nothing in this paper is legal advice or a statement of the law. This paper is an exposition of an (untested) argument as to the effect of the BSD license."

Comments (10 posted)

Interviews

Talking virtualization with rPath (Linux.com)

Linux.com talks with Brett Adams, vice president of development at rPath. "Brett Adams, vice president of development at rPath, sees 2007 as a pivotal year for virtualization. When you are looking at the future of virtualization, few companies are as well positioned to make observations as rPath. Billing itself as the "software appliance company," rPath was one of the first companies to focus on virtual appliances and simplifying their production."

Comments (none posted)

UML maintainer Jeff Dike makes virtualization predictions (Linux.com)

Joe 'Zonker' Brockmeier talks with User Mode Linux maintainer Jeff Dike. "One of the great things about Linux.conf.au is the chance to mingle with some of the brightest lights in the open source community. For example, Jeff Dike, author and maintainer of User-Mode Linux is here this week to talk about UML and the Kernel-based Virtual Machine (KVM). During one of the breaks on Monday, I sat down with Dike to talk about UML's immediate future, and picked his brain about other virtualization technologies."

Comments (none posted)

Resources

How do I find out Linux CPU utilization? (nixCraft)

nixCraft presents a tutorial on monitoring CPU utilization under Linux. "Whenever a Linux system CPU is occupied by a process, it is unavailable for processing other requests. Rest of pending requests must wait till CPU is free. This becomes a bottleneck in the system. Following command will help you to identify CPU utilization, so that you can troubleshoot CPU related performance problems. Finding CPU utilization is one of the important tasks. Linux comes with various utilities to report CPU utilization."

Comments (none posted)

Handicapping New DNS Extensions and Applications (O'ReillyNet)

Cricket Liu discusses DNS extensions on O'Reilly. "The DNS system is not static; there are several proposed new extensions and applications under development and adoption. DNS expert Cricket Liu explores five for updates and their future: the Sender Policy Framework, IPv6 support, Internationalized Domain Names, ENUM, and the DNS Security Extensions."

Comments (4 posted)

Flash Player 9 For Linux (x86)

Nicola Soranzo sent in a link to this Adobe blog post: "This is it. This is the officially blessed version of the Adobe Flash Player 9 for Linux (x86). Not a beta version; the final version. It's released. Today." You can download a tar.gz or .rpm from here.

HowtoForge has an article on installing the new native Linux Flash Player 9 from Adobe on an Ubuntu Edgy Eft desktop.

Comments (16 posted)

Back Up/Restore Hard Drives And Partitions With Ghost4Linux (HowtoForge)

HowtoForge presents a tutorial on Ghost4Linux. "This tutorial shows how you can back up and restore hard drives and partitions with Ghost4Linux. Ghost4Linux is a Linux Live-CD that you insert into your computer; it contains hard disk and partition imaging and cloning tools similar to Norton Ghost. The created images are compressed and transferred to an FTP server instead of cloning locally."

Comments (none posted)

Whistle while you work to run commands on your computer (IBM developerWorks)

In an IBM developerWorks article, Nathan Harrington shows how to control a computer by whistling. "Use Linux® or Microsoft® Windows®, the open source sndpeek program, and a simple Perl script to read specific sequences of tonal events -- literally whistling, humming, or singing at your computer -- and run commands based on those tones. Give your computer a short low whistle to check your e-mail, or unlock your screensaver with the opening bars of Beethoven's Fifth Symphony. Whistle while you work for higher efficiency."
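
The article's setup relies on sndpeek plus a Perl script; as a rough analogue only, here is a hypothetical Python sketch that reads raw audio from standard input (for example, piped from arecord), picks the dominant pitch of each block with an FFT, and runs a command when a simple low/high tone pattern is heard. The pattern, thresholds and command are made-up placeholders, and NumPy is assumed to be installed.

    # tone_commands.py -- illustrative Python analogue of the sndpeek + Perl setup.
    # Assumed usage: arecord -f S16_LE -r 8000 -t raw | python tone_commands.py
    import sys, subprocess
    import numpy as np

    RATE = 8000                      # samples per second, must match arecord
    BLOCK = 2048                     # samples per analysis block
    PATTERN = ["low", "high", "low"] # hypothetical "check mail" whistle pattern
    COMMAND = ["fetchmail", "-c"]    # placeholder command to run

    def dominant_freq(samples):
        spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
        return np.argmax(spectrum) * RATE / len(samples)

    def classify(freq, level):
        if level < 500:              # ignore silence / background noise (placeholder)
            return None
        return "low" if freq < 800 else "high"

    history = []
    while True:
        raw = sys.stdin.buffer.read(BLOCK * 2)     # 2 bytes per 16-bit sample
        if len(raw) < BLOCK * 2:
            break
        samples = np.frombuffer(raw, dtype=np.int16).astype(float)
        tone = classify(dominant_freq(samples), np.abs(samples).mean())
        if tone and (not history or history[-1] != tone):
            history.append(tone)
        if history[-len(PATTERN):] == PATTERN:
            subprocess.call(COMMAND)
            history = []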

Comments (3 posted)

Delete Qmail Server messages Queue (Debian Admin)

Debian Admin has published a tutorial on managing the Qmail MTA message queue. "Occasionally, viruses will get past scanners before the signatures get updated; if they exist in large numbers, it is often practical to stop the Qmail installation briefly in order to clean out all messages containing a signature related to the virus. Whatever the reason to pull items from your mail queue, this program will delete them in a manner that lets you restore them easily."
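
The tutorial's program handles the details, but the underlying idea is straightforward: with qmail stopped, find the queue entries whose message files match a signature and move them, along with their companion files, somewhere they can be restored from. The sketch below is a hypothetical Python illustration of that idea, not the program the article covers; the paths, the signature string, and the set of queue subdirectories handled are simplified placeholders.

    # pull_from_queue.py -- illustrative sketch only, not the program the article covers.
    # With qmail stopped, move queued messages whose bodies contain a signature string
    # into a holding directory, preserving the split-directory layout so the files can
    # be moved back later.  Paths and the signature below are placeholders.
    import os, shutil

    QUEUE = "/var/qmail/queue"
    HOLD = "/var/qmail/queue-hold"
    SIGNATURE = b'filename="badworm.exe"'   # placeholder pattern to look for

    def matching_messages():
        """Yield (split, name) for every message in mess/ containing SIGNATURE."""
        mess = os.path.join(QUEUE, "mess")
        for split in sorted(os.listdir(mess)):          # the numeric hash subdirectories
            for name in os.listdir(os.path.join(mess, split)):
                with open(os.path.join(mess, split, name), "rb") as f:
                    if SIGNATURE in f.read():
                        yield split, name

    def pull(split, name):
        # mess/, info/, local/ and remote/ share the same split/name layout; a real
        # queue tool also has to consider todo/, intd/ and bounce/ entries.
        for subdir in ("mess", "info", "local", "remote"):
            src = os.path.join(QUEUE, subdir, split, name)
            if os.path.exists(src):
                dst_dir = os.path.join(HOLD, subdir, split)
                os.makedirs(dst_dir, exist_ok=True)
                shutil.move(src, os.path.join(dst_dir, name))

    if __name__ == "__main__":
        for split, name in list(matching_messages()):
            print("pulling message", name)
            pull(split, name)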

Comments (none posted)

How To Automatically Scan Uploaded Files For Viruses With php-clamavlib (HowtoForge)

HowtoForge presents a tutorial on scanning uploaded files for viruses with php-clamavlib. "This guide describes how you can automatically scan files uploaded by users through a web form on your server using PHP and ClamAV. That way you can make sure that your upload form will not be abused to distribute malware. To glue PHP and ClamAV, we install the package php5-clamavlib/php4-clamavlib which is rather undocumented at this time. That package is available for Debian Etch and Sid and also for Ubuntu Dapper Drake and Edgy Eft, so make sure you use one of these platforms."
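
For comparison, here is a hypothetical sketch of the same check in Python rather than PHP: it does not use php-clamavlib at all, but simply runs the clamscan command-line scanner against the saved upload and interprets its documented exit codes (0 clean, 1 infected, 2 error). The function and file names are placeholders.

    # scan_upload.py -- illustrative Python analogue of the article's PHP/ClamAV glue:
    # reject an uploaded file if ClamAV flags it.
    import subprocess

    def upload_is_clean(path):
        result = subprocess.run(
            ["clamscan", "--no-summary", path],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        if result.returncode == 0:
            return True          # no virus found
        if result.returncode == 1:
            return False         # infected: refuse the upload
        raise RuntimeError("clamscan failed: " + result.stderr.decode())

    if __name__ == "__main__":
        import sys
        path = sys.argv[1]       # e.g. the temporary file a web framework saved
        print("clean" if upload_is_clean(path) else "infected")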

Comments (none posted)

Reviews

Desktop Virtualization with VMware Player and Workstation (Linux.com)

Linux.com reviews VMware Player and Workstation. "More and more organizations are consolidating physical hardware using virtualization. But virtualization technology and tools aren't limited to big-dollar corporations. With the free-as-in-beer VMware Player, and the very cheap VMware Workstation, you too can use this fancy technology to utilize the processing horsepower of cheap multi-core hardware available off-the-shelf."

Comments (3 posted)

Tiny Linux computer aims at terminal/kiosk users (LinuxWorld)

LinuxWorld takes a look at the tiny Linutop computer. "The gadget, about the size of a portable CD player, is an embedded Linux PC without a hard disk. It has four USB2 ports, for connecting a mouse, PC, keyboard and other devices (such as a USB Wi-Fi adapter, storage stick, etc.), and an integrated 100Mbps Ethernet port and VGA video output. Microphone and headphone jacks are also built in. The box itself runs AMD’s embedded Geode processor, with 256MB of memory, and 512MB of ROM, for storing the operating system image and applications."

Comments (5 posted)

New Linux development tools for PS3 (PRO-G)

PRO-G covers the PS3 RapidMind Development Platform. "It has been announced that RapidMind and Terra Soft have teamed up to make application development for the PlayStation 3 easier than ever before. Last month Terra Soft announced the release of Yellow Dog Linux for the PlayStation 3, and now with the RapidMind Development Platform, developers can more easily create applications that run on PS3 and other hardware which utilizes the Cell Broadband Engine."

Comments (none posted)

Innotek makes virtualization software available as open source (Linux.com)

Linux.com looks at Innotek's VirtualBox Open Source Edition. "Germany's InnoTek Systemberatung GmbH started out by supporting enterprises and financial institutions that were running IBM infrastructure. "As many of these enterprises were running outdated solutions such as OS/2, but cannot simply replace such huge infrastructures with the snap of a finger, virtualization was a natural solution to them," says Achim Hasenmueller, general manager of InnoTek. Hasenmueller adds that his company has been in the virtualization business for a long time and has also contributed substantial parts to what is now Microsoft Virtual PC. "Today we staff the largest group of virtualization experts in Europe," he says."

Comments (5 posted)

Miscellaneous

Browser-Based EMRs Threaten Software Freedom (LinuxMedNews)

LinuxMedNews reports on the loss of customer control that comes with browser-based Electronic Medical Record systems. "The age of the all-browser-based Electronic Medical Record/Electronic Health Record (EMR/EHR) is upon us. The local area network (LAN) model upon which older-generation EMR companies built their products is dead. This paradigm shift is occurring now. This development threatens Free and Open Source medical software, practitioners and patients as they have never been threatened before."

Comments (5 posted)

Java IDEs make nice: Eclipse joins JCP (Linux-Watch)

Linux-Watch covers the joining of the Java Community Process by the Eclipse Foundation. "Maybe cats and dogs can live together, after all. Sources close to the matter revealed today that the Eclipse Foundation has joined the Java Community Process (JCP). A quick check of the JCP membership list reveals that the Foundation is listed as a member. The sources also said that Eclipse joined the JCP this week, and that the formal announcement is scheduled for next week. Historically, the two development groups have not worked or played well with each other. Both, however, have had a common goal: an open-source, inexpensive Java IDE (integrated development environment) that can be used on multiple platforms to produce programs for various computer architectures."

Comments (none posted)

Want To Buy a $100 Linux Laptop? (NewsFactor Network)

NewsFactor Network reports on a new funding scheme for the OLPC project. "The nonprofit group that hopes to bring inexpensive laptops to poor kids around the world is now considering the possibility of allowing the $100 machines to be purchased by the general public. The backers of the One Laptop Per Child (OLPC) project haven't suddenly been bitten by the capitalist bug, but rather have come up with a way to offer the computers to the general public while increasing their availability to school children in developing nations. According to one plan being considered, the computers would be offered to customers who would have to purchase a minimum of two laptops at a time -- with the second going to the developing world."

Comments (none posted)

ISP offers students cash for open source code (ZDNet)

ZDNet reports that the UK Free Software Network (UKFSN) is offering to pay students for open source code. "As an incentive to get students to push the code boundaries of open source software, a British software network is offering cash for fresh code, reports Welsh IT News Online. The UK Free Software Network (UKFSN), a small Hertfordshire-based Internet service provider, conceived the idea to encourage students to develop software that can be modified by its end users."

Comments (none posted)

Page editor: Forrest Cook

Announcements

Non-Commercial announcements

Ardour project needs new sponsors

The Ardour multi-track digital audio workstation project has lost its primary funding source. "Ardour is looking for new project sponsors after our major contributor, Solid State Logic, informed us that a conflict of interest prevents them from continuing the financial support they gave to Ardour during 2006. Ardour is at an exciting point in its development: version 2.0 is about to be released very shortly, and more and more developers have recently started contributing to the project. Once version 2.0 is released, we have a large list of substantial new features waiting to be added, including MIDI recording and playback (already implemented as part of our participation in Google's Summer of Code)."

Full Story (comments: none)

Commercial announcements

Autodesk Maya 8.5 3D animation software announced

Autodesk, Inc. has announced the release of Autodesk Maya 8.5, a 3D animation, modeling and rendering system. "Now shipping, Maya 8.5 gives artists enhanced creative control, enabling faster completion of complex animations and simulations. "Autodesk is committed to making Maya the foundation for modern production pipelines. Maya 8.5 supports industry-standard Python scripting, offering improved workflows and development productivity," said Marc Petit, Autodesk's Media & Entertainment vice president."

Comments (none posted)

Columbitech Wireless VPN Adds Linux Server Support

Mobile security solutions provider Columbitech has announced support for SUSE Linux Enterprise Server 9 and 10. The new offering, which combines Columbitech Wireless VPN with SUSE Linux Enterprise Server, enables customers to automatically set up any computer as a VPN server and firewall within minutes.

Full Story (comments: none)

Empower Technologies announces digital media tablet reference design platform

Empower Technologies has announced a new Digital Media Tablet (DMT) reference design platform. "This new DMT reference design is specifically targeted to developers, value-added integrators and manufacturers innovating portable digital media devices for the automotive, RFID and consumer electronics industries. Each of these industries requires high-performance codec computation and control operations such as audio/video streaming/recording; high-speed wireless communication such as WiFi, CDMA, and GPRS; and data processing such as data acquisition, analysis and control."

Comments (none posted)

Fluendo announces Windows Media and MPEG codec support for Linux

Fluendo has announced the availability of new CODECs for Linux and Solaris. "Users of GNU/Linux and Solaris operating systems have previously lacked solutions which enabled them to license and use popular media formats such as Windows Media, MPEG-2 and MPEG-4 in accordance with the laws of their country. Through Fluendo's agreements with Microsoft and MPEG LA such a solution is now available. By closely integrating with the GStreamer multimedia framework, Fluendo's new plugins enable support for these widely used codecs in popular GNU/Linux and Solaris applications such as Totem Video Player, Rhythmbox music player, Banshee Music player, Elisa Media Center and the Jokosher sound editor."

Full Story (comments: 21)

Fujitsu and Hopling Technologies to Ship Linux WiMAX Board Packages

Fujitsu Microelectronics America, Inc. and Hopling Technologies have announced the availability of their jointly-produced Linux-based WiMAX baseband system-on-chip (SoC) reference kits. ""Hopling Technologies is deeply committed to helping businesses across the world realize the benefit and full potential of WiMAX," said Ivo van Ling, Chief Technology Officer of Hopling Technologies. "This joint Linux project with Fujitsu is an extremely important project and it's a milestone in Hopling Technologies' Linux-based WiMAX product strategy that will allow companies to roll out seamless broadband wireless systems.""

Comments (none posted)

GoDaddy.com announces Metropolis hosting community

GoDaddy.com has announced the Metropolis hosting community. "The new hosting community allows anyone with a Go Daddy(R) Web hosting account to install, manage, rate and review third-party hosted applications, offering a personal touch in the big-city hosting world. At launch, Metropolis offers more than 30 free software titles for immediate installation to Linux(R) or Windows(R) based hosting accounts. From blogs and forums to galleries and wikis, users learn about software options from other community members and share their experience with reviews and ratings."

Comments (none posted)

TIBCO BusinessWorks 5.4 to add support for BPEL

TIBCO Software Inc. has announced that version 5.4 of its TIBCO BusinessWorks software will add support for version 1.1 of the OASIS Web Services Business Process Execution Language. "The increasing adoption of SOA by enterprises is creating a complex network of discrete services. As companies break down their monolithic applications and create new services, their end goal is to compose and reuse those services as processes or new applications. TIBCO BusinessWorks with support for BPEL will further help companies simplify the re-use of existing IT assets through a more manageable, adaptive and cost-effective infrastructure."

Comments (none posted)

Xilinx releases new ISE software for FPGA design

Xilinx, Inc. has announced version 9.1i of its Xilinx Integrated Software Environment. "ISE Foundation(TM) 9.1i suite is immediately available with prices starting at US $2,495. A full-featured 60-day evaluation version is available at no charge. All versions of ISE 9.1i software packages support Windows(R) 2000 and Windows XP Professional and Linux(R) Red Hat(R) Enterprise 3.0 and 4.0. ISE Foundation also supports Solaris(R) 2.8 and 2.9."

Comments (none posted)

New Books

Backup and Recovery - O'Reilly's Latest Release

O'Reilly has published the book Backup & Recovery by W. Curtis Preston.

Full Story (comments: none)

Dynamic HTML: The Definitive Reference, Third Ed. - New from O'Reilly

O'Reilly has published the book Dynamic HTML: The Definitive Reference, Third Edition by Danny Goodman.

Full Story (comments: none)

Resources

FSFE Newsletter

The January 12, 2007 edition of the Free Software Foundation Europe newsletter is online with the latest FSFE news. Topics include: Looking back and forward, Georg Greve at "Novell informiert", and Get Active: Join the Fellowship!

Full Story (comments: none)

Linux Documentation Project Weekly News

The December 28, 2006 edition of the Linux Documentation Project Weekly News is online with the latest documentation updates.

Full Story (comments: none)

Calls for Presentations

Akademy 2007 Call for Participation (KDE.News)

KDE.News has announced the Akademy 2007 call for participation. "The KDE community is getting ready to set a major milestone for the free desktop with the upcoming release of KDE 4. This will mark a new level of user experience, technical excellence in the framework and opportunities for free software on the desktop. The KDE contributors conference, which is part of Akademy, the world summit of the KDE community, will be the place to present the newest developments, long-term strategies or interesting input from the surrounding communities, projects and societies. Be part of it, present your thoughts, ideas and work at Akademy 2007 in Glasgow, Scotland." The event takes place from June 30 to July 7, 2007; abstracts are due by February 14, 2007.

Comments (none posted)

LayerOne 2007 CFP Announced

A call for papers has gone out for LayerOne 2007. "What is LayerOne? Currently in its 4th year, LayerOne is a computer security and technology conference held in the Los Angeles area. The purpose of LayerOne is to bring together the many different types of folks who make up the security community for a two-day discussion of the technologies that impact our professional and personal lives." The event takes place on May 5-6, 2007; submissions are due by March 31.

Full Story (comments: none)

Calling All Innovators to the 2007 O'Reilly OSCON

A call for participation has gone out for the 2007 O'Reilly Open Source Convention. "This year, the program will focus on the progress and innovation that open source movers and shakers are contributing to the computing industry. Program chairs will be looking for proposals that convey real-world scenarios using open source, and the new tools and ideas that will help participants be more productive or write better code. OSCON will return to the Oregon Convention Center in Portland from July 23-27, 2007. The Call for Participation deadline is February 5, 2007."

Full Story (comments: none)

2007 Ottawa Linux Symposium announcements

A few announcements have gone out regarding the 2007 Ottawa Linux Symposium. The event will be happening earlier this year: June 27 to 30. The CFP is currently open, with submissions due by February 5. "As mentioned in the CFP, we are expanding the scope of the topics this year and strongly encourage you to submit papers on any and all leading edge user space development, and advanced system administration."

Full Story (comments: none)

Upcoming Events

Discover the Magic of Technology at O'Reilly ETech 2007

The O'Reilly ETech 2007 conference will take place in San Diego, CA on March 26-29. "Spanning the gamut of tech from the infrastructure supporting mass-market players, to the promise of alternative energy sources, ETech will examine what's current while keeping an eye firmly trained on what's coming."

Full Story (comments: none)

KDE-NL New Year's Meeting Coming Up (KDE.News)

KDE.News has announced The KDE-NL New Year's Meeting. "On Saturday, 20th January, the traditional KDE-NL New Year's Meeting will be held in Lent near Nijmegen in the eastern part of the country. KDE-NL invites contributors, interested users and other affiliated people for the day to get to know each other in person and discuss all kinds of KDE-related things."

Comments (none posted)

Nepomuk-KDE Workshop, Paris

The Nepomuk-KDE Workshop will take place at the Mandriva office in Paris, France on February 1-2, 2007. "It is time to present the Nepomuk-KDE project to the world in a more hands-on manner. A move of the core parts of Nepomuk-KDE to kdelibs is planned, which raises the need for information all the more." See this KDE.News article for more information.

Full Story (comments: none)

SCALE preparation continues

The SCALE conference is coming soon, and discount tickets are still available. "The final touches are being put on the Fifth Annual So Cal Linux Expo, to be held February 9th to 11th in Los Angeles. All speaker slots are full, and all exhibitor booths have been filled. The seven speaker slots in the Women In Open Source conference have been filled and its speaker panel is being finalized."

Full Story (comments: none)

Open Source Health Care Summit Schedule Announced (LinuxMedNews)

LinuxMedNews has announced the schedule for the Open Source Health Care Summit. "Speakers will include Fred Trotter (GPL Medicine), Scott Shreeve, Eishay Smith (IBM), and Gerald Bortis (Mirth Project). The Open Source Health Care Summit will be held on Feb 9, 2007 as part of SCALE 5x." The event takes place at the Westin hotel near the Los Angeles, CA airport.

Comments (none posted)

Events: January 25, 2007 to March 26, 2007

The following event listing is taken from the LWN.net Calendar.

Date(s): Event (Location)

January 20 - January 26: Cell Hack-a-thon (Loveland, CO, USA)
January 23 - January 26: Open Source Meets Business (Nürnberg, Germany)
January 30 - February 1: Solutions Linux Expo (Paris, France)
February 1 - February 2: LinuxDays Luxembourg (Luxembourg, Luxembourg)
February 2: FUDCon Boston 2007 (Boston, MA, USA)
February 7 - February 9: Free Software World Conference 3.0 (Badajoz, Spain)
February 7 - February 9: Xorg Developer's Conference (Santa Clara, CA, USA)
February 9: Women In Open Source (Los Angeles, USA)
February 9: Open Source Health Care Summit (Los Angeles, USA)
February 10 - February 11: 2007 Southern California Linux Expo (Los Angeles, USA)
February 12 - February 13: Vancouver PHP Conference (Vancouver, BC, Canada)
February 12 - February 13: Linux Storage and Filesystem Workshop (San Jose, CA, USA)
February 12 - February 16: Ruby on Rails Bootcamp Training (Atlanta, USA)
February 12 - February 15: 3GSM World Congress 2007 (Barcelona, Spain)
February 14 - February 15: LinuxWorld OpenSolutions Summit (New York, NY, USA)
February 15: TiE Open Source Summit (Pittsburgh, PA, USA)
February 16: The Ubucon New York (New York, NY, USA)
February 19 - February 23: DebianEDU DevCamp (Soissons, France)
February 22: PyCon Tutorial Day (Addison, Texas)
February 22: CELF Japan Linux Technical Jamboree #13 (Tokyo, Japan)
February 22 - February 24: OpenMind 2007 (San Giorgio a Cremano, Naples, Italy)
February 23 - February 25: PyCon 2007 (Addison, Texas)
February 23: PHP Conference UK 2007 (London, England)
February 24 - February 25: Free and Open Source Software Developers' European Meeting (Brussels, Belgium)
February 24 - February 25: Java/DevJam/2007/Fosdem (Brussels, Belgium)
February 26 - March 1: PyCon Sprints (Addison, Texas)
February 26 - March 2: PHP5 Bootcamp Training at the Big Nerd Ranch (Atlanta, Georgia, USA)
February 27 - March 1: O'Reilly Emerging Telephony Conference (San Francisco, CA)
February 27 - March 2: EUSecWest Applied Security Conference (London, UK)
February 28 - March 2: Network and Distributed System Security Symposium (San Diego, CA, USA)
March 2 - March 3: LinuxForum 2007 (Copenhagen, Denmark)
March 3 - March 8: O'Reilly Emerging Technology Conference (San Diego, CA, USA)
March 5 - March 8: EclipseCon 2007 (Santa Clara, CA, USA)
March 5 - March 6: Karlsruhe Workshop on Software Radios (Karlsruhe, Germany)
March 8 - March 10: 2007 Open Source Think Tank (Napa, CA, USA)
March 10 - March 13: Camp 5 Advanced Zope3 Training (Charlotte, North Carolina, USA)
March 12 - March 16: QCon (London, England)
March 12 - March 16: Third Annual Security Enhanced Linux Symposium (Baltimore, US)
March 12 - March 14: BOSSA Conference (Porto de Galinhas, Brazil)
March 13 - March 14: The Linux Foundation Japan Symposium (Tokyo, Japan)
March 14 - March 16: PHP Quebec Conference (Montreal, Canada)
March 14 - March 17: Barbeque Sprint for Plone3 (Charlotte, North Carolina, USA)
March 15 - March 21: CeBIT computer fair (Hannover, Germany)
March 16 - March 17: MountainWest RubyConf (Salt Lake City, USA)
March 18 - March 23: Novell BrainShare 2007 (Salt Lake City, Utah, USA)
March 19 - March 21: UKUUG LISA/Spring Conference 2007 (Manchester, UK)
March 22 - March 25: Linux Audio Conference (Berlin, Germany)
March 23 - March 25: ShmooCon (Washington DC, USA)
March 23 - March 25: Guademy (Coruña, Spain)
March 24: FSF Associate Membership Meeting (Cambridge, MA, USA)

If your event does not appear here, please tell us about it.

Web sites

Searchme Launches Wikiseek Search Engine for Wikipedia

Searchme, Inc. has announced the launch of Wikiseek, an improved search engine for the Wikipedia reference site. "Wikiseek is available as a destination web site as well as inside Wikipedia as a Firefox extension. Wikiseek is based on proprietary technology developed by Searchme, which utilizes the suggestions of tens of thousands of vertical search engines to deliver more highly relevant searches. The result is a faster, richer Wikipedia search experience."

Comments (none posted)

Audio and Video programs

Web 2.0 Podcast: A Debate on Net Neutrality (O'ReillyNet)

O'Reilly presents a podcast on network neutrality. "Web 2.0 Summit program chair John Battelle moderated a debate on net neutrality between Google VP and chief internet evangelist Vinton Cerf and Robert Pepper, who leads a team driving Cisco's global agenda for advanced technology policy. This episode is sponsored by the Intel Software Network."

Comments (none posted)

Page editor: Forrest Cook


Copyright © 2007, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds