
LWN.net Weekly Edition for November 25, 2010

On breaking things

By Jonathan Corbet
November 24, 2010
Our systems run a complex mix of software which is the product of many different development projects. It is inevitable that, occasionally, a change to one part of the system will cause things to break elsewhere, at least for some users. How we respond to these incidents has a significant effect on the perceived quality of the platform as a whole and on its usability. Two recent events demonstrate two different responses - but not, necessarily, a clear correct path.

The two events in question are these:

  • An optimization applied to glibc changed the implementation of memcpy(), breaking a number of programs in the process. In particular, the proprietary Flash plugin, which, contrary to the specification, uses memcpy() to copy overlapping regions, is no longer able to produce clear audio with some types of media.

  • A change in the default protections for /proc/kallsyms, merged for the 2.6.37 kernel, was found to cause certain older distributions to fail to boot. The root cause is apparently a bug in klogd, which does not properly handle a failure to open the symbol file.
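
The klogd failure is of a familiar shape: the daemon assumes that the open will succeed and falls over when it does not. A minimal sketch of the defensive alternative (hypothetical code, not klogd's actual source) might look like:

    #include <stdio.h>

    /* Open the kernel symbol table if we can; a daemon should degrade
     * gracefully (log raw addresses) rather than refuse to start. */
    static FILE *open_kallsyms(void)
    {
        FILE *fp = fopen("/proc/kallsyms", "r");

        if (!fp)
            fprintf(stderr, "kernel symbols unavailable; continuing without them\n");
        return fp;    /* callers must cope with NULL */
    }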

In summary, we have two changes, both of which were intended to improve the behavior of the system - better performance, in the glibc case, and better security for /proc/kallsyms. In each case, the change caused other code which was buggy - but which had been working - to break. What came thereafter differed considerably, though.

In the glibc case, the problem has been experienced by users of Fedora 14, which is one of the first distributions to ship the new memcpy() implementation. Given that code using glibc has been rendered non-working by this change, one might reasonably wonder if the glibc developers have considered reverting it. As far as your editor can tell, though, nobody has even asked them; the developers of that project have built a reputation for a lack of sympathy in such situations. They would almost certainly answer that the bug is in the users of memcpy() who, for whatever reason, ignored the longstanding rule that the source and destination arrays cannot overlap. It is those users who should be fixed, not the C library.

The Fedora project, too, is in a position to revert the change. The idea was discussed at length on the fedora-devel mailing list, but the project has, so far, taken no such action. At this level, there is a clear tension between those who want to provide the best possible user experience (which includes a working Flash player) in the short term, and those who feel that letting this kind of regression hold back a performance improvement is bad for the user experience in the longer term. According to the latter group, reverting the change would slow things down for working programs and relieve the pressure on Adobe to fix its bug. It is better, they say, for affected users to apply a workaround and complain to Adobe. That view appears to have carried the day.

In the /proc/kallsyms case, the change was reverted; an explicit choice was made to forgo a potential security improvement to avoid breaking older distributions. This decision has been somewhat controversial, both on the kernel mailing list and here on LWN. The affected distribution (Ubuntu 9.04) is relatively old; its remaining users are unlikely to put current kernels on it. So a number of voices were heard to say that, in this case, it is better to have the security improvement than compatibility with older distributions.

Linus was clear about his policy, though:

The rule is not "we don't break non-buggy user space" or "we don't break reasonable user-space". The rule is simply "we don't break user-space". Even if the breakage is totally incidental, that doesn't help the _user_. It's still breakage.

The kernel's record with regard to this rule is, needless to say, not perfect, but that record as a whole is quite good; that has served the kernel well. It is usually possible to run current kernels on very old distributions, allowing users to gain new hardware support and features, or simply to help with testing. It forms a sort of contract with the kernel's users which gives them some assurance that new releases will not cause their systems to break. And, importantly, it helps the kernel developers to keep overall kernel quality high; if you do not allow once-working things to break, you can be at least somewhat sure that the quality of the kernel is not declining over time. Once you start allowing some cases to break, you can never be sure.

There is probably little chance of a kernel-style "no regressions" rule being universally adopted. The kernel has the advantage that its interface to the rest of the system is relatively narrow; the system as a whole has a far larger range of things that can break. It is a challenge to keep new kernel releases from causing problems with existing applications; for a full distribution, it's perhaps an insurmountable challenge. That is part of why companies pay a lot of money for distributions which almost never make new releases.

Some kinds of regressions are also seen as being tolerable, if not actively desirable. There has never been any real sympathy for broken proprietary graphics drivers, for example. The proprietary nature of the Flash plugin will not have helped in this case either; it is irritating to know exactly how to fix a problem, but to be unable to actually apply that fix. Any free program affected by this bug would, if anybody cared about it at all, have been fixed long ago. Flash users, meanwhile, are still waiting for Adobe to change a memcpy() call to memmove(). One could certainly argue that holding Adobe responsible for its bug - and, at the same time, demonstrating the problems that come with proprietary programs - is the right thing to do.
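
The change Adobe needs to make really is that small; here is a hedged sketch of the sort of fix involved (hypothetical code, certainly not Adobe's source):

    #include <string.h>

    /* Shift data toward the front of a buffer.  The source and destination
     * regions overlap, so the C standard leaves memcpy() undefined here; it
     * merely happened to work with the old glibc implementation. */
    static void shift_down(char *buf, size_t len, size_t by)
    {
        /* Wrong: memcpy(buf, buf + by, len - by); */
        memmove(buf, buf + by, len - by);    /* defined for overlapping regions */
    }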

On the other hand, one could argue that breaking Flash is a good way to demonstrate to users that they should be using a different distribution - or another operating system entirely. Your editor would suggest that perfection with regard to regressions is not achievable, but it still behooves us to try for it when we can. There is a lot to be said for creating a sense of confidence that software updates are a safe thing to apply. It will make it easier to run newer, better software, inspire users to test new code, and, maybe, even bring some vendors closer to upstream. We should make a point of keeping things from breaking, even when the bugs are not our fault.

Comments (55 posted)

Reports of procmail's death are not terribly exaggerated

November 24, 2010

This article was contributed by Nathan Willis

The mail delivery agent (MDA) procmail is a Linux and Unix mainstay; for years it has been the recommended solution for sorting large volumes of email and filtering out spam. The trouble is that it is dead, and has been for close to a decade. Or at least that may be the problem, depending on how you look at it. The question of when (or if) to declare an open source project dead does not have a clear answer, and many people still use procmail to process email on high-capacity systems.

For those unfamiliar with it, MDAs like procmail receive incoming mail from mail transport agents (MTAs) like Sendmail or Postfix, then process the received messages according to user-defined "recipes." Recipes examine the headers and body of messages, and are usually used to sort email into different mailboxes, forward messages to other addresses, and, perhaps most importantly, to recognize and dispose of spam — often by triggering an external spam filtering tool like SpamAssassin. Recipes can also modify the messages themselves, for example to truncate dangerously long message bodies or abbreviate irritatingly long recipient lists.

Officially, the last stable procmail release was version 3.22, made in September of 2001. As one might expect, there has never been an official "the project is dead" announcement. Instead, only circumstantial evidence exists. Although several of the FTP mirrors include what appear to be development "snapshot" packages as recent as November of 2001, there does not appear to have been any substantial work since that time. The developers' mailing list has hardly seen a non-spam blip since 2003.

A side effect of a project abandoned that long ago is that there is no web-based source code repository; such repositories are a fixture today, but were not when development stopped, so only the tarballed releases uploaded to the FTP and HTTP download sites remain for FOSS archaeologists to examine. Similarly, a great many of the links on the official project page, including mailing list archives, external FAQ pages, and download mirrors, have succumbed to link-rot over the years and no longer provide useful information for those just getting started.

I'm not dead yet

Despite all this, procmail still has a loyal following. The procmail users' mailing list is actually quite active, with most of the traffic focusing on helping administrators maintain procmail installations and write or debug recipes. Reportedly, many of today's procmail users are Internet service providers (ISPs), who naturally have an interest in maintaining their existing mail delivery tool set.

procmail's defenders usually cite its small size and its steady reliability as reasons not to abandon the package. A discussion popped up on the openSUSE mailing list in mid-November about whether or not the distribution should stop packaging procmail; Stefan Seyfried replied by saying that rather than dying ten years ago, the program was "finished" ten years ago:

[...] it is feature complete and apparently pretty bugfree. It seems that even the last five years of compiler improvements in detecting overflows and such did not uncover flaws in procmail, which I personally think is pretty impressive.

In a similar vein, when Robert Holtzman asked on the procmail users' list whether or not the project was abandoned, Christopher L. Barnard replied "It works, so why mess with it? It does what it needs, no more development is needed..."

But there are risks inherent in running abandonware, even if it was of stellar quality at the last major release. First and foremost are unfixed security flaws. Mitre.org lists two vulnerabilities affecting procmail since 2001: CVE-2002-2034, which allows remote attackers to bypass the filter and execute arbitrary code by way of specially-crafted MIME attachments, and CVE-2006-5449, which uses a procmail exploit to gain access to the Horde application framework. In addition, of course, there are other bugs that remain unfixed. Matthew G. Saroff pointed out one long-standing bug, and the procmail site itself lists a dozen or so known bugs as of 2001.

Just as importantly, the email landscape and the system administration marketplace have not stood still since 2001. Ed Blackman noted that procmail cannot correctly handle MIME headers encoded according to RFC 2047, which is how non-ASCII text is carried in message headers (a header like Subject: =?ISO-8859-1?Q?Gr=FC=DFe?= decodes, for example, to the German "Grüße"), despite the fact that RFC 2047 dates back to 1996. RFC 2047-encoded headers are far from mandatory, but they continue to grow more common.

Bart Schaefer notes that, every now and then, someone floats the possibility of a new maintainer stepping up — but no one ever actually does so. Theoretical questions about unfixed bugs aside, that practical reality seems to settle the matter: if no one works on the code, and no one is willing to work on the code, then it can fairly be called abandoned.

What's a simple procmail veteran to do?

The most often-recommended replacement for procmail is Maildrop, an application developed by the Courier MTA project. Like procmail, Maildrop reads incoming mail on standard input and is intended to be called by the MTA, not run directly. It also requires the user to write message filters in a language built around regular expressions, but it reportedly uses an easier-to-read (and thus easier-to-write) syntax.

The project also advertises several feature and security improvements over procmail, such as copying large messages to a temporary file before filtering them rather than loading them into memory. Maildrop can deliver messages to maildir mailboxes as well as to mbox mailboxes; procmail natively supports just mbox, although it can be patched (as distributions seem to have done) or use an external program to deliver to maildir mailboxes.

The merits of the competing filter-writing syntaxes are a bit subjective, but it is easy to see that procmail's recipe syntax is more terse, using non-alphabetic characters and absolute positioning in place of keywords like "if" and "to." For example, the Maildrop documentation provides some simple filter rules, such as this one, which fires on messages from the sender address boss@domain.com whose Subject line contains the string "project status":

    if (/^From: *boss@domain\.com/ \
            && /^Subject:.*[:wbreak:]project status[:wbreak:]/)
    {
        cc "!john"
        to Mail/project
    }

The action enclosed in curly braces routes the message to the Mail/project folder, and forwards a copy of the message to the user "john." An equivalent in procmail's recipe language might look like this instead:

    :0
    * ^From.*boss@domain\.com
    * ^Subject:.*(project status)
    {
        :0 c
        ! john@domain.com

        :0:
        ${DEFAULT}/project
    }

The ":0" line starts a new recipe, and the asterisks introduce its "conditions"; neither the asterisk nor the exclamation point is part of a regular expression. The braces group two nested recipes that run when those conditions match: the "c" flag tells procmail to forward a carbon copy to the address after the exclamation point and keep going, while the trailing colon on the second ":0:" tells procmail to lock the mail file, which is necessary when saving the message to disk. As you can see, the Maildrop syntax is not noticeably longer, but it could be easier to mentally parse late at night — particularly if reading filters written by someone else. Regrettably, there does not seem to be an active project to automatically convert procmail recipes to Maildrop filters, which means switching between the packages requires revisiting and rewriting the rules.

Maildrop is not the only actively maintained MDA capable of filling in for procmail, although it is the easiest to switch to, since it reads messages on standard input just as procmail does. Dovecot's Local Delivery Agent (LDA) module, for instance, has a plugin that allows administrators to write filtering rules in the Sieve language (RFC 5228). Maildrop has an advantage over the Dovecot LDA, though, in that it is designed to work not only with Courier, but also with the Qmail and Postfix MTAs.

If you are currently running procmail without any trouble, then there is certainly no great need to abandon it and switch to Maildrop or any other competitor. OpenSUSE, for its part, eventually concluded that there was no reason to stop packaging procmail, for the very reasons outlined above: it works, and people are still using it. However, ten years is a worryingly long time to go without an update. The simple fact that only two CVEs have been filed against procmail since its last release is in no way a guarantee that it is free of exploitable (or even remotely exploitable) flaws. At the very least, if your mail server relies on the continued availability of procmail, now is a good time to start examining the alternatives. Lumbering undead projects can do a lot of damage when they trip and fall.

Comments (40 posted)

Impressions from the 12th Realtime Linux Workshop in Nairobi

November 19, 2010

This article was contributed by Thomas Gleixner

A rather small crowd of researchers, kernel developers, and industry experts found their way to the 12th Realtime Linux Workshop (RTLWS), hosted at Strathmore University in Nairobi, Kenya. The small showing was not a big surprise, but it also did not make the workshop any less interesting.

After eleven workshops in Europe (Vienna, Milano, Valencia, Lille, Linz, Dresden), America (Orlando, Boston, Guadalajara), and Asia (Singapore, Lanzhou), the organization committee of the Realtime Linux Workshop decided that it was time to go to Africa. The main reason was the numerous authors who had submitted papers in previous years but were unable to attend the workshop due to visa problems; others simply could not attend such events due to financial constraints. So, in order to give these interested folks the opportunity to attend, and to push the African FLOSS community, and of course especially the FLOSS realtime community, Nairobi was chosen as the first African city to host the Realtime Linux Workshop.

Kenya falls into the category of countries which seem to be completely disorganized, but very effective on the spontaneous side at the same time. As a realtime person you need to deal with very relaxed deadlines, gratuitous resource reservations and less-than-strict overall constraints, but it's always a good experience for folks from the milestone- and roadmap-driven hemisphere to be reminded that life actually goes on very well if you sit back, relax, take your time and just wait to see how things unfold.

Some of the workshop organizers arrived a few days before the conference and had adjusted to the local way of life enough that they were not taken by surprise when many of the people registered for the conference did not show up while, at the same time, unregistered attendees filled the gaps.

Day 1

The opening session, scheduled for 9AM on Monday, started on time at 9:40, which met the already-adjusted deadline constraints perfectly well. Dr. Joseph Sevilla and deputy vice-chancellor Dr. Izael Pereira from Strathmore University, along with Nicholas McGuire from OSADL's Realtime Linux working group, welcomed the participants. Peter Okech, the leader of the Nairobi organization team, introduced the logistics.

Without further ado, Paul McKenney introduced us to the question of whether realtime applications require multicore systems. In Paul's unmistakable way, he led us through a maze of questions; only the expected quiz was missing. According to Paul, realtime systems face the same challenges as any other parallel programming problem. Parallelizing a given computation does not necessarily guarantee that things will go faster. Depending on the size of the work set, the way you split up the data set, and the overhead caused by synchronization and interprocess communication, the outcome can be significantly slower than the original, serialized approach, which might leave you very frustrated. Paul gave the unsurprising advice that you should definitely avoid the pain and suffering of parallelizing your application if the existing serialized approach already does the job.

If you are in the unlucky position that you need to speed up your computation by parallelization, you have to be prepared to analyze the ways to split up your data set, choose one of those ways, split up your code accordingly, and figure out what happens. Your mileage may vary and you might have to lather, rinse and repeat more than once.

So that leaves you on your own, but at least one aspect of the problem can be quantified. The required speedup and the number of cores available allow you to calculate the ratio between the work to be done and the communications overhead. A basic result is that you need at least N+1 cores to achieve a speedup of N; as the number of cores increases, the ratio of communications overhead to useful work grows nonlinearly, leaving each core less time for actual work. Larger jobs are more suitable than small ones but, even then, it depends on the type of computation and on the ability to split up the data set in the first place. Parallelization, both within and outside of the realtime space, still seems to be an unlimited source of unsolved problems and headaches.
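
A rough back-of-the-envelope reading of that result (my own illustration, not Paul's slides): if each of n cores loses a fraction c of its time to synchronization and communication, the achievable speedup is only

    S(n) = n(1 - c), \qquad S(n) \ge N \;\Longrightarrow\; n \ge \frac{N}{1 - c} > N

so any non-zero overhead already pushes the requirement past N cores; and since c itself tends to grow with n, the required core count grows faster than the desired speedup.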

Paul left it to me to confuse the audience further with an introduction to the realtime preemption patch. Admittedly, the realtime preemption patch is a complex piece of software and not likely to fall into the category of realtime systems whose correctness can be verified with a mathematical proof. Carsten Emde's follow-up talk looked at the alternative approach of monitoring such systems over a long period of time to reach a high level of confidence in their correctness. There are various methods available in the kernel tracer for monitoring wakeup latencies, some of which have a low enough impact to allow long-term monitoring even on production systems. Carsten explained OSADL's realtime QA farm effort in depth. The long-term testing in the QA farm has improved the quality of the preempt-rt patches significantly and gives us good insight into their behaviour across different hardware platforms and architectures.

On the more academic side, the realtime researchers from the ReTiS Lab at the Scuola Superiore Sant'Anna in Pisa, Italy looked at even more complex systems in their talk titled "Effective Realtime computing on Linux". Their main focus is on non-priority-based scheduling algorithms and their possible applications. One of the interesting aspects they looked at is resource and bandwidth guarantees for virtual machines. This is not really a realtime issue, but the base technology and scheduling theory behind it emerged from the realtime camp and might demonstrate the usefulness of non-priority-based scheduling algorithms beyond the obvious application fields in the realtime computing space.

One of the most impressive talks on day one was the presentation of a "Distributed embedded platform" by Arnold Bett from the University of Nairobi. Arnold described an effort driven by physicists and engineers to build an extremely low-cost platform applicable to a broad range of essential needs in Kenya's households and industry. Based on a $1 Z80 microcontroller, configurable and controllable by the simplest PC running Linux, they built appliances for solar electricity, LED-based room lighting, and simple automation tasks in buildings and on shop floors. All of the tools and technology around the basic control platform are based on open source technology, and both the hardware and the firmware of the platform are going to be available under a non-restrictive license. The hardware platform itself is designed to be manufactured cost-effectively, without requiring huge investments from local people.

Day 2

The second day was spent on hands-on seminars about git, tracing, powerlink, rt-preempt, and deadline scheduling. The sessions drew both conference participants and students from the local universities. In addition to the official RTLWS seminars, Nicholas McGuire gave seminars on "filesystem from scratch", "application software management", "kernel build", and "packaging and customizing Debian" before and after the workshop at the University of Nairobi.

Such hands-on seminars have been held alongside most of the RTLWS workshops. From experience, we know that it is often the initial resistance that stops the introduction of new technologies. Proprietary solutions are presented as "easy to use", as solving problems without the need to manage the complexity of the technology and without investing in the engineering capabilities of the people providing these solutions. This is, and always has been, an illusion or, worse, a way of creating continued dependency. People can only profit from technology when they take control of it in all aspects and when they gain the ability to express their problems and their solutions in terms of these technological capabilities. For this to happen, it is not sufficient to know how to use technology; it is necessary to understand the technology and to be able to manage the complexity involved. That includes mastering the task of learning and teaching technology, not "product usage". That is the intention of these hands-on seminars and, while we have been using GNU/Linux as our vehicle to introduce core technologies, the principles go far beyond that.

Day 3

The last day featured a follow-up by Peter Okech to last year's surprising talk on inherent randomness. It was fun to see interesting new ways of exploiting the non-deterministic behavior of today's CPUs. Maybe we can get at least a seed generator for the entropy pool out of this work in the not-so-distant future.

The afternoon session was filled with an interesting panel discussion about "Open Innovation in Africa". Open Innovation is, according to Carsten Emde, a term summing up initiatives from open source to open standards with the goal of sharing non-differentiating know-how to develop common base technologies. He believes that open innovation - not only in the software area - is the best answer to the technological challenges of today and the future. Spending the collective brain power on collaborative efforts is far more worthwhile than reinventing the wheel in different and incompatible shapes and sizes all over the place.

Kamau Gachigi, Director of FabLab at the University of Nairobi, introduced the collaborative innovation efforts of FabLab. FabLabs provide access to modern technology for innovation. They began as an outreach project from MIT's Center for Bits and Atoms (CBA). While CBA works on multi-million dollar projects for next-generation fabrication technologies, FabLabs aim to provide equipment and materials in the low-digit-dollars range to gain access to state-of-the-art and innovative next-generation technologies. FabLabs have spread out from MIT all over the world, including to India and Africa, and provide a broad range of benefits from technological empowerment, technical training, localized problem solving, and high-tech business incubation to grass-roots research. Kamau showed the impressive technology work at FabLabs which is done with a very restricted budget based on collaborative efforts. FabLabs are open innovation at its best.

Alex Gakuru, Chair of the ICT Consumers Association of Kenya, provided deep insight into the challenges of promoting open source solutions in Kenya. One of the examples he provided was the Kenyan state program to provide students with access to affordable laptops, on whose committee he served. Alex found that it was impossible to get reasonable quotes for Linux-based machines for various reasons, ranging from the uninformed nature of committee members, through the still not-entirely-resolved corruption problem, to massive bullying by the usual-suspect international technology corporations, which want to secure their influence and grab hold of these emerging markets. He resigned from the committee in frustration after unfruitful attempts to make progress on the matter. He is convinced that Kenya could have saved a huge amount of money if there had been a serious will to fight the mostly lobbying-driven choice of going with the "established" (that is, best-marketed) solution. His resignation from this particular project did not break his enthusiasm for, and deep concern about, consumer rights, equal opportunities, and open and fair access to new technologies for all citizens.

Evans Ikua, FOSS Certification Manager at FOSSFA (Free and Open Source Software Foundation for Africa, Kenya), reported on his efforts to provide capacity building for small and medium FOSS enterprises in Africa. His main concern is to enable fair competition based on technical competence, to prevent Africa being overtaken by companies which use their huge financial backing to buy themselves into the local markets.

Evans's concerns were pretty much confirmed by Joseph Sevilla, Senior Lecturer at Strathmore University, who complained about the lack of a single "Open Source/Linux" company that competes with the commercial offerings of the big players. His resolution of the problem - to just give up - raised more than a few eyebrows among the panelists and the audience, though.

After the introductory talks, a lively discussion emerged about how to apply and promote the idea of open innovation in Africa but, of course, we did not find the philosopher's stone that would bring us to a conclusive resolution. The panelists agreed that many of the technologies available in Africa have come in from the outside; sometimes they fit local needs, and in other cases they simply don't. Enabling local people not only to use, but to design, develop, maintain, and spread their own creative solutions to their specific problems is a key issue in developing countries. To facilitate this, they need not only access to technical solutions, but full and unrestricted control of the technological resources with which to build those solutions. Taking full control of technology is the prerequisite for deploying it effectively in a specific context - and, as the presentations showed us, Africa has its own set of challenges, many of which we simply would never have thought of. Open innovation is a key to unleashing this creative potential.

Conclusions

Right after the closing session, a young Kenyan researcher pulled me aside to show me a project he has been working on for quite some time. Coincidentally, this project falls into the open innovation space as well. Arthur Siro, a physicist with a strong computer science background, got tired of the fact that there is not enough material and equipment for students to get hands-on experience with interesting technology. Academic budgets are limited all over the world, but especially in a place like Kenya. At some point, he noticed that an off-the-shelf PC contains hardware which can be used both for learning and for conducting research experiments. The most interesting component is the sound card. So he started working on feeding signals into the sound card, sampling them, and running the samples through analytic computations like fast Fourier transforms. The results can be fed to a graphics application or made available, via a simple parallel port, to external hardware. The framework is based purely on existing FOSS components and allows students to dive into this interesting technology with the cheapest PC hardware they can get their hands on. His plans go further, but he'll explain them himself soon, when his project goes public.
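
To make that signal path concrete, here is a minimal sketch of the idea (my own illustration, not Arthur's framework): it reads raw 16-bit mono PCM samples from standard input - piped in, for instance, from a capture tool such as arecord -t raw -f S16_LE -c 1 -r 44100 - and prints a magnitude spectrum for each block, which a plotting program can then consume.

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    #define BLOCK 1024                     /* samples per analysis block */

    int main(void)
    {
        int16_t samples[BLOCK];
        int k, n;

        while (fread(samples, sizeof(samples[0]), BLOCK, stdin) == BLOCK) {
            /* Naive O(N^2) DFT for clarity; a real tool would use an FFT library. */
            for (k = 0; k < BLOCK / 2; k++) {
                double re = 0.0, im = 0.0;

                for (n = 0; n < BLOCK; n++) {
                    double phase = 2.0 * M_PI * k * n / BLOCK;
                    re += samples[n] * cos(phase);
                    im -= samples[n] * sin(phase);
                }
                /* One line per frequency bin: bin number and magnitude. */
                printf("%d %f\n", k, sqrt(re * re + im * im));
            }
            putchar('\n');                 /* blank line between blocks */
        }
        return 0;
    }

Compiled with the usual -lm, the output can be piped straight into a plotting tool; the real framework, of course, goes well beyond this, including driving external hardware through the parallel port.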

My personal conclusion from this interesting time in Nairobi is that we really need to look out for the people who are doing the grunt work in these countries and give them all the help we can. One thing is sure: part of that help will be to go back there in the near future and show them that we really care. In hindsight, we should have made more of an effort up front to reach out to the various groups and individuals interested in open source and open innovation, but hindsight is always easier than foresight. At least we know how to do better next time.

On behalf of the participants and the OSADL RTLWS working group, I want to say thanks again to the Nairobi organization team, led by Peter Okech, for setting up the conference, taking care of transportation and the tours to Nairobi National Park, and guiding us safely around. Lastly, we would like to encourage those readers of LWN.net who are involved in organizing workshops and conferences to think about bringing their events to Africa as well, in order to give the developers and students there the chance they deserve to participate in the community.

(The proceedings of the 12th RTLWS are available as a tarball of PDF files).

Comments (9 posted)

Novell acquired by Attachmate

By Jake Edge
November 24, 2010

The big news in the Linux world this week is Novell's agreement to be acquired by Attachmate. While the financial terms of that agreement seem—at first blush anyway—to be a fairly reasonable deal for Novell shareholders, there is something of an odd addition: a concurrent sale of "intellectual property assets" to a newly formed holding company. That holding company, CPTN Holdings LLC, was organized by Microsoft, which makes the acquisition more than a little worrisome to many in the Linux and free software communities.

Novell has been trying to find the right buyout offer since at least March, when Elliott Associates made an unsolicited offer to buy the company for $5.75/share. Attachmate offered $6.10/share, but it also gets an influx of $450 million from the asset sale to CPTN, so it is, in effect, putting up less money than Elliott Associates would have. In any case, the Novell board, and presumably its stockholders, are likely pleased with the extra $0.35/share they will receive.

In the 8K filing that Novell made about the acquisition, the assets that are being sold to CPTN were specified as 882 patents. Which patents those are is an open question. While the idea of more patents in the hands of Microsoft and a "consortium of technology companies" is somewhat depressing, it's too early to say whether they are aimed squarely at Linux. Novell has been in a lot of different businesses over the years, so it's possible—though perhaps unlikely—that these patents cover other areas.

While Attachmate is not a well-known company in the Linux and free software world—or even outside of it—it has made all the right noises about what it plans to do with Novell once the acquisition is completed. The press release says that Attachmate "plans to operate Novell as two business units: Novell and SUSE", which may imply that there isn't a plan to break up the company and sell off the pieces—it certainly makes logical sense to split those, basically unrelated, parts into separate business units. Mono project lead Miguel de Icaza has said that Mono development will continue as is. Attachmate also put out a brief statement to try to reassure the openSUSE community: "Attachmate Corporation anticipates no change to the relationship between the SUSE business and the openSUSE project as a result of this transaction".

The 8K mentions some interesting escape clauses for Novell, including the ability to void the asset sale if a better offer for the company and those patents comes along. In addition, if the acquisition by Attachmate does fall through for some other reason, CPTN can continue with the patent purchase, but it must license the patents back to Novell. That license will be a "royalty-free, fully paid-up patent cross license" of all patents that both Novell and CPTN hold (including the 882 in question) on terms that are "no less favorable" than those offered to others outside of CPTN. Essentially, Novell wants to ensure that it can still use those patents if it doesn't get acquired by Attachmate.

Though the 8K is silent about what rights Attachmate will get to the patents, one plausible scenario is that Attachmate is already a member of CPTN. If that's the case, it may be exempt from any patent lawsuits using the 882 Novell patents. That could set up a situation where an attack on various other distributions—but not SUSE—is made. Given the cross-licensing language that is in the 8K, it's a bit hard to believe that Attachmate wouldn't have some kind of agreement in place. That, in turn, could imply that some of those patents are potentially applicable to Linux and free software.

It is tempting to speculate about what this means for our communities—we have done a bit of that here and many are going much further—but it is rather premature. The escape clause certainly raises the possibility that there are other Novell suitors out there, so this acquisition and asset sale may not even take place. If they do, we will find out which of Novell's patents are affected and be able to see what impact, if any, they might have on Linux and free software.

Taken at face value, Attachmate's statements about its plans seem to pose no threat to our communities or to the many members who are employed by Novell. CPTN, on the other hand, may be a potent threat if the patents are used offensively against Linux and free software. While it always makes sense to be prepared for the worst, one can always hope that this particular transaction (or set of transactions) will be fairly neutral. With luck, it may actually increase the income and profits for SUSE and lead to more investment in free software. We will just have to wait and see.

Comments (4 posted)

Page editor: Jonathan Corbet

Inside this week's LWN.net Weekly Edition

  • Security: MeeGo security framework; New vulnerabilities in php, suricata, systemtap
  • Kernel: Tracing; An alternative to suspend blockers; Ghosts of Unix past, part 4.
  • Distributions: State of the Debian-Ubuntu relationship; Liberté Linux, NetBSD, SimplyMEPIS, ...
  • Development: Lyx 2.0; Buildroot, Claws Mail, Coccinelle, Wayland licensing, ...
  • Announcements: Microsoft helping OpenStreetMap; Novell sold to Attachmate; Google; MPL survey

Copyright © 2010, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds