
A tale of two release cycles

As most LWN readers will be aware, the 2.6.21 kernel has been released. The 2.6.21 process was relatively difficult, mostly as a result of the core timer changes which went in. These changes were necessary - they are the path forward to a kernel which works better on all types of hardware - but they caused some significant delays in the release of the final 2.6.21 kernel. Even at release time, this kernel was known not to be perfect; there were a dozen or so known regressions which had not been fixed.

The reason we know about these regressions is that Adrian Bunk has been tracking them for the past few development cycles. Mr. Bunk has let it be known that he will not be doing this tracking for future kernels. From his point of view, the fact that the kernel was released with known regressions means that the time spent tracking them was wasted. Why bother doing that work if it doesn't result in the tracked problems being fixed?

What Mr. Bunk would like to see is a longer stabilization period:

There is a conflict between Linus trying to release kernels every 2 months and releasing with few regressions. Trying to avoid regressions might in the worst case result in an -rc12 and 4 months between releases. If the focus is on avoiding regressions this has to be accepted.

Here is where one finds the fundamental point of disagreement. The kernel used to operate with long release cycles, but the "stable" kernels which emerged at the end were not particularly well known for being regression free. Downloading and running an early 2.4.x kernel should prove that point to anybody who doubts it.

The reasoning behind the current development process (and the timing of the 2.6.21 release in particular), as stated by Linus Torvalds, is:

Regressions _increase_ with longer release cycles. They don't get fewer.. This simply *does*not*work*. You might want it to work, but it's against human psychology. People get bored, and start wasting their time discussing esoteric scheduler issues which weren't regressions at all.

In other words, holding up a release for a small number of known bugs prevents a much larger set of fixes, updates, new features, additional support, and so on from getting to the user base. Meanwhile, the developers do not stop developing, and the pile of code to be merged in the next cycle just gets larger, leading to even more problems when the floodgates open. It would appear that most kernel developers believe that it is better to leave the final problems for the stable tree and let the development process move on.

The 2.6.21 experience might encourage a few small changes; in particular, Linus has suggested that truly disruptive changes should maybe have an entire development cycle to themselves. As a whole, however, the process is not seen as being broken and is unlikely to see any big "fixes."

For an entirely different example, let us examine the process leading to the Emacs 22 release. Projects managed by the Free Software Foundation have never been known for rapid or timely releases, but, even with the right expectations in place, this Emacs cycle has been a long one: the previous major release (version 21) was announced in October, 2001. In those days, LWN was talking about the 2.4.11 kernel, incorporation of patented technology into W3C standards, the upcoming Mozilla 1.0 release, and the Gartner Group's characterization of Linux as a convenient way for companies to negotiate lower prices from proprietary software vendors. Things have moved on a bit since those days, but Emacs 21 is still the current version.

The new Emacs major release was recently scheduled for April 23, but it has not yet happened. There is one significant issue in the way of this release: it seems that there is a cloud over some of the code which was merged into the Emacs Python editing mode. Until this code is either cleared or removed, releasing Emacs would not be a particularly good idea. It also appears that the wisdom of shipping a game called "Tetris" has been questioned anew and is being run past the FSF's lawyers.

Before this issue came up, however, the natives in the Emacs development community were getting a little restless. Richard Stallman may not do a great deal of software development anymore, but he is still heavily involved in the Emacs process. Emacs is still his baby. And this baby, it seems, will not be released until it is free of known bugs. This approach is distressing for Emacs developers who would like to make a release and get more than five years' worth of development work out to the user community.

This message from Emacs hacker Chong Yidong is worth quoting at length:

To be fair, I think RMS' style of maintaining software, with long release cycles and insistence on fixing all reported bugs, was probably a good approach back in the 80s, when there was only a handful of users with access to email to report bugs.

Nowadays, of course, the increase in the number of users with email and the fact that Emacs CVS is now publicly available means that there will always be a constant trickle of bug reports giving you something to fix. Insisting---as RMS does---on fixing all reported bugs, even those that are not serious and not regressions, now means that you will probably never make a release.

It has often been said that "perfect" is the enemy of "good." That saying does seem to hold true when applied to software release cycles; an attempt to create a truly perfect release results in no release at all. Users do not get the code, which does not seem like a "perfect" outcome to them.

Mr. Yidong has another observation which mirrors what was said in the kernel discussion:

There is also a positive feedback loop: RMS' style for maintaining Emacs drives away valuable contributors who feel their efforts will never be rewarded with a release (and a release is, after all, the only reward you get from contributing to Emacs).

It's not only users who get frustrated by long development cycles; the developers, too, find them tiresome. Projects which adopt shorter, time-based release cycles rarely seem to regret the change. It appears that there really are advantages to getting the code out there in a released form. Your editor is not taking bets on when Emacs might move to a bounded-time release process, though.



an intermediate example

Posted May 3, 2007 4:05 UTC (Thu) by roelofs (guest, #2599) [Link]

gdb would seem to represent an intermediate stage: moderately regular releases, but with (some) bugs that never seem to get fixed. :-/

In particular, those of us writing C++ code have been bitten time and again by the inability to set constructor breakpoints based on source-code line numbers (and perhaps in other circumstances). This apparently broke with the release of GCC 3.0 and was first reported more than four years ago (against gdb 5.3 and g++ 3.2.1, if not earlier); we're now up to gdb 6.6, and we merely have "some hopes" that it will be fixed in the next release. Granted, it's a difficult problem, but...other hard problems have been solved in, say, a mere two or three years. :-)

Of course, it doesn't help that most of us (including myself) don't have the expertise to help out, nor that it's an FSF project and therefore has the (small) additional hurdle of requiring copyright assignments from potential contributors. So I really can't complain too much ("...but sometimes I still do").

Greg

an intermediate example

Posted May 3, 2007 7:16 UTC (Thu) by bkoz (guest, #4027) [Link]

...the continuing saga of gdb vs. C++ means that elaborate logging structures have been built to debug basic constructs. I consider this a reversion to printf.

I don't think this sad state of affairs has anything to do with the gdb release strategy, but instead is more about gdb vs. g++ when dealing with debug info generation, the difficulty of representing and correctly displaying the full complexity of C++ types with scope info, lack of interest or skill in the gdb community, and the constraints of having to support a wide variety of devices in gdb, many of them obscure, with severe technical limitations, and often poorly documented.

GDB Alternatives

Posted May 3, 2007 8:42 UTC (Thu) by alex (subscriber, #1355) [Link]

Much as I love my GDB command line, I can't help but think it's one of those core apps that could do with a step back and a rewrite. I've looked at the code a few times and it's not pretty, which I think makes for a huge learning curve for any wannabe hacker.

Of course it would require some people to actually start that effort. Have you seen any alternatives to GDB? Could this be an area where the monoculture stifles true innovation in development?

GDB Alternatives

Posted May 3, 2007 9:19 UTC (Thu) by scottt (subscriber, #5028) [Link]

There is frysk: http://sourceware.org/frysk/
If you don't mind debugging your C++ program with a debugger written in Java.

GDB Alternatives

Posted May 3, 2007 11:08 UTC (Thu) by mtk77 (guest, #6040) [Link]

I am worried about frysk. It seems that the answer to all problems with gdb is "frysk will make gdb obsolete", but they look like they solve completely different problems.

I'm sure it's not deliberate, but it seems that the almost Microsoft-esque tactic of preannouncing something has the same effect - of putting off potential competitors.

There is no good reason that free software cannot produce something of the quality of the wonderful proprietary debugger TotalView but no work is being done in that direction.

GDB Alternatives

Posted May 4, 2007 4:27 UTC (Fri) by mitchskin (subscriber, #32405) [Link]

Robert O'Callahan (of Mozilla) isn't rewriting gdb, but he's working on an execution recorder that provides enough data for a debugger to reconstruct an entire program execution run down to the instruction level.

blog post introducing the idea

project page

gdb's fundamental problem

Posted May 3, 2007 17:40 UTC (Thu) by JoeBuck (subscriber, #2330) [Link]

gdb is architected to assume that there is a one-to-one correspondence between source code lines and object code positions. This obviously breaks with templates, but what isn't as widely known is that constructors and destructors also have an issue.

The reason that breakpoints in constructors often fail is that g++ (actually, any C++ compiler I know of) creates multiple copies of a constructor under most circumstances: the so-called in-charge and not-in-charge versions, depending on whether the complete object is being constructed or only the base subobject of a derived class. The same goes for the destructor (depending on whether it's a delete call or not).

The result is that there are two code positions associated with the same source line. Ideally, when you set a breakpoint based on a source line, gdb would put a breakpoint in all code positions matching this source line. Instead, it arbitrarily chooses one.

A tale of two release cycles

Posted May 3, 2007 4:54 UTC (Thu) by eberhardy (guest, #5148) [Link]

what, no comparisons to windows?

in the windows world you get paid to finish your work and your likes and dislikes don't necessarily enter into it!

A tale of two release cycles

Posted May 3, 2007 5:21 UTC (Thu) by proski (subscriber, #104) [Link]

The issue is orthogonal to operating systems. It's about code quality and release process. There is good and bad code on every OS, written for money and just for fun. Emacs runs on Windows, by the way.

A tale of two release cycles

Posted May 3, 2007 9:40 UTC (Thu) by jospoortvliet (subscriber, #33164) [Link]

Many kernel devs are also paid for their work, yet they still have opinions. I think that's also true for Windows/MS developers, though most of the time their opinions just won't be listened to.

A tale of two release cycles

Posted May 3, 2007 16:28 UTC (Thu) by tjc (subscriber, #137) [Link]

in the windows world you get paid to finish your work and your likes and dislikes don't necessarily enter into it!
People working in the "Windows World" are resigned to the fact that they are working on a hopelessly buggy system that never will be fixed. You either acquiesce to this situation ("it's a paycheck"), or you get out, as I did eight-and-a-half years ago.

A tale of two release cycles

Posted May 3, 2007 5:25 UTC (Thu) by PhracturedBlue (subscriber, #4193) [Link]

To be fair to Adrian, as far as I can tell from reading the thread, he was specifically complaining about releasing with known regressions (some known for more than 1 month), not about other types of bugs that might still be present, so I'm not sure the comparison with Emacs and their search for perfection is entirely just in this case.

a question

Posted May 3, 2007 5:51 UTC (Thu) by mmarkov (guest, #4978) [Link]

What is the difference between a regression and a bug?

a question

Posted May 3, 2007 5:56 UTC (Thu) by dlang (✭ supporter ✭, #313) [Link]

A regression is breaking something that used to work. A bug is something that has never worked since it was implemented.

a question

Posted May 3, 2007 6:02 UTC (Thu) by johnkarp (guest, #39285) [Link]

A regression is a specific kind of bug. In particular, regressions are bugs that were not present in previous releases. Regressions generally warrant more concern than other bugs, since they likely break things people have come to depend on working.

a question

Posted May 3, 2007 9:54 UTC (Thu) by ekj (subscriber, #1524) [Link]

A regression is a subclass of bug. All regressions are bugs, but not all bugs are regressions.

A regression is when something fails to work that used to work previously.

That is bad -- because it causes stuff that used to work to stop working, which annoys users.

It's usually less bad to ship a program with something non-working that *never* worked. Sure it's a bug, but if the users were OK with the last version of the program, they'll be OK with this one too.

regression vs bug

Posted May 3, 2007 17:40 UTC (Thu) by giraffedata (subscriber, #1954) [Link]

To be precise, you have to call what we're discussing here a "regression bug." Because there are regressions that are not bugs. A bug is where the product does not work as designed. Sometimes you design a release to lack functions the previous release had, so the release contains a regression, but not a bug.

Design regressions cause all the same damage as regression bugs (breaks expectations, causes people not to "upgrade"), but the process for eliminating them is entirely different.

BTW, "regression" is from the Latin for "to step back."

regression vs bug

Posted May 4, 2007 17:48 UTC (Fri) by i3839 (guest, #31386) [Link]

To give another kind of non-bug regression:

A performance degradation is a regression too, while it's not always caused by a bug. So nothing stopped working, something just worked less well than it used to.

"stay the course" vs. fork?

Posted May 3, 2007 7:06 UTC (Thu) by bkoz (guest, #4027) [Link]

One of the vaunted advantages of the GPL is the ability to fork when a maintainer is being disagreeable or otherwise unrealistic. Perhaps this is that time?

This part of the email:

Sadly, RMS seems determined to "stay the course", instead of adopting
strategies that have been proven to work in other software projects.

Reminds me of gcc, pre-egcs.

"stay the course" vs. fork?

Posted May 3, 2007 7:13 UTC (Thu) by tetromino (subscriber, #33846) [Link]

fork is this way ---> http://www.xemacs.org/

"stay the course" vs. fork?

Posted May 3, 2007 7:59 UTC (Thu) by bkoz (guest, #4027) [Link]

This is the old fork: I believe there is already substantial divergence between emacs 22 and xemacs 21.5.27, which is not the point. (The point is to get the current FSF emacs 22 sources out, and the emacs project back on a more regular release schedule.)

"stay the course" vs. fork?

Posted May 3, 2007 9:22 UTC (Thu) by smitty_one_each (subscriber, #28989) [Link]

The true, strategic horror of emacs/xemacs is that significant man-hours are wasted trying to maintain codebases that work well in either environment.
Talk about pushing jell-o up a hill with a toothpick.

"stay the course" vs. fork? (forking the question thread, ok?)

Posted May 3, 2007 13:43 UTC (Thu) by TxtEdMacs (guest, #5983) [Link]

I read that the xemacs group wanted to merge back into emacs, within the last year or so. It was my impression that RMS rejected the offer out of hand and perhaps spitefully. Would you happen to know if there is any truth to my, admittedly, superficial impression? I remember too when RMS rejected an xemacs request to use the emacs documentation. That may explain why I have the predisposition to interpret RMS's stances negatively, however, I am aware of my bias and facts can change my opinion.

Several years ago I used xemacs running on a Unix over straight emacs, I think it was in the version 20.x series (21.x was released about that time, but I didn't upgrade). At the same time I was trying not to use Linux too much at home, but there too I thought I preferred xemacs over emacs, though neither worked quite the same as under Unix. I thought the xemacs under Unix was easier than either under linux, however, now I just remember to hit the delete key first when typing into a highlighted region.

Now using Linux all the time, I prefer emacs, where my use of either the Mac or Windows is so minor as to not warrant mention.

xemacs merge?

Posted May 10, 2007 7:46 UTC (Thu) by anton (guest, #25547) [Link]

I read that the xemacs group wanted to merge back into emacs, within the last year or so. It was my impression that RMS rejected the offer out of hand
I have not heard of that, but if they wanted to merge, the FSF would want them to assign the copyright to the FSF, and I don't think the xemacs people can do that. Their distributed copyright situation also means that they cannot switch their manual to the GFDL, and thus xemacs cannot incorporate parts of the GNU Emacs manual.

"stay the course" vs. fork?

Posted May 3, 2007 9:05 UTC (Thu) by jschrod (subscriber, #1646) [Link]

While I'm an old-time XEmacs user myself, one must confess that XEmacs' development doesn't go forward smoothly either. There are too few active developers in that branch.

release early, release often

Posted May 3, 2007 9:55 UTC (Thu) by johoho (subscriber, #2773) [Link]

This was the motto years ago for open source projects. Is it only my impression that, as projects grow mature, they tend to release less early/often? Seems like losing the connection to your roots..

Wik

release early, release often

Posted May 3, 2007 14:02 UTC (Thu) by drag (subscriber, #31333) [Link]

Well, as they mature you get a wider audience at the same time that there is less need to actually push forward as hard.

Software tends to go through a life cycle. I think I may have gotten a variation of this from an ESR book...

Baby Stage: 'Scratch your own itch' project. Developer sees a need and develops the basic software. Makes a lot of assumptions, makes a lot of guesses. He doesn't have a lot of information to work on, so he is making it up as he goes along.

Childhood: If the project survives its own birth it will attract new developers. The project is small and nimble with a relatively focused and small code base. People are excited, hype is generated about the possibilities. Lots of amazing work, lots of innovation and new ideas.

Adolescence: The code base is bloated and large now compared to what it used to be. Many original developers are driven away by politics, and are often more interested in starting something new and exciting rather than maintaining a bloated code base into the future. Often marred by large amounts of missing functionality, and a lot of the stuff that works may be worthless to most people. Things the original developer thought were important turn out not to be. Things that were considered to be small details turn out to be big roadblocks. The original code base is stressed out. Etc., etc.

So here is where most projects die, or at least go into maintenance mode, unable to proceed much further. People go on, take the lessons learned, and start new projects (going back to step 1).

If it survives adolescence and survives being rewritten, then it will enter maturity, its golden years. The core code is refined, bug-free, reliable. Usually it will be extensible and able to meet the needs of many different people without having to resort to hacks and headaches.

By that point most people will stop caring about it. It will be mature, boring, and it will 'just work'. Somebody may have a eureka moment and it will get new hype and new development.. but mostly the hype will be focused toward projects that use this project as a dependency.

If you can imagine some very common projects....

Like, for example:

RCS to CVS to SVN for client-server version control systems.
Then people trying different approaches with Git and Arch.

Going from new --> mature to new --> mature and over again. Each replacement improving and refining their respective approaches. The solutions and advances the CVS developers worked on paved the way for SVN and other newer revision control systems.

Or another big example is the XFree86 to X.org fork.

They took a dying middle-aged project and turned it back to childhood through a fork. Using lessons learned in the past, they may pull it off: it will skip adolescence and drift happily off into its golden years. Then new users, and even existing users, will stop caring about X at all.. because they will no longer have to. The interesting things will be what people are doing with X.

Lots of projects are following this pattern: Debian/Ubuntu, GCC/EGCS, the refining of Gnome (Gnome 1.x vs 2.x), the ABI/API purge of the KDE3/Qt3 to KDE4/Qt4 transition. Etc., etc.

Of course major software projects may go through many multiple cycles like this.

Not a release cycle length problem

Posted May 3, 2007 11:54 UTC (Thu) by nim-nim (subscriber, #34454) [Link]

I don't like this article too much.

Adrian was not complaining that the release cycle was too short. "You want over-long cycles" is the spin people who didn't listen to him put on his message. Indeed, some of the examples Adrian pointed at were problems reported a month ago (so there was definitely no lack of time to look at them).

Adrian was objecting to the general attitude consisting of silently ignoring problem reports if you feel like it.

His argument went this way. Developers used to complain:
- no one tested -rcs
- testers didn't write good bug reports
- bug reports were submitted via bugzilla instead of direct mail
- reports arrived too late for problems to be fixed
- people lumped together all kinds of problems (RFEs, long-lurking bugs never reported before, hardware problems, etc.)

So Adrian:
- collected the supposedly non-existent -rc problem reports
- reformatted them
- wrote personalised, repeated mails difficult to miss
- documented their early date of submission
- kept only regression reports (bugs we know were introduced by recent code changes)

This is a mass of heavy and painful work to undertake. And despite all these documentation efforts, some problems which had no good reason not to be fixed were not even looked at. And no one saw any problem with this. (Indeed, people told him to lower his expectations, and that of course they could ignore as many reports as they wanted. And when he asked for process/tool changes to make his work easier, people asked him to do even more work without any promise to look at the result.)

I perfectly understand his reaction. This was a blatant lack of respect towards his work (and the reporting work others did before, which he was the only one to acknowledge). When you ask a lot of volunteer work of someone, the minimum is to do something with the result.

You have healthy projects where dev & test teams respect each other, and you have projects where developers play prima donnas and consider every other bit of the ecosystem cannon fodder. I guess the ugly truth is that the Linux project is in the second category.

We'll soon see if Adrian is replaced, and if alienating him to save some inconvenient developer effort was the right thing to do.

Not a release cycle length problem

Posted May 3, 2007 12:16 UTC (Thu) by davecb (subscriber, #1574) [Link]

Agreed, it's a quality management problem... and a staffing problem in the QA part of the process.

A former employer, having become mature (and perhaps even a bit stodgy), used to use the "bus model" of releasing software, where a release would wait until the bus was full of people, then leave for its destination. This annoyed everyone, not least of whom was QC.

They now use a "train model". Every three months, there is a freeze on the current release stream, and the QC cycle starts on it. It takes about a month to exhaustively test everything, and two more to get the regressions fixed and the brand-new bugs whacked down to a credible level. Then they release.

While they're finding and fixing regressions, the fixes are applied to the about-to-be-released code and also to the current development stream.

This, however, is just the "cycle" aspect of changing the release architecture to provide enough time for QA to happen.

As it's mildly hard to get developers into QC, the company makes working in the QC process an unofficial prerequisite to becoming a member of the kernel team. And it's a good way to learn enough to be able to break into serious development.

Think of a different kind of "kernel janitor" (;-))

--dave

Not a release cycle length problem

Posted May 3, 2007 18:12 UTC (Thu) by giraffedata (subscriber, #1954) [Link]

As it's mildly hard to get developers into QC, the company makes working in the QC process an unofficial prerequisite to becoming a member of the kernel team

Sometimes, a company just makes QC work an official prerequisite of collecting one's paycheck; i.e. it hires testers for whatever they cost.

The reason the Linux kernel can't use this well-worn strategy for eliminating bugs is that the special economics of open source and community development don't provide a way to get people to do all that boring alpha testing (testing for the sake of testing) and debugging.

So we've been experimenting for years with release cycle lengths, bug tracking systems, etc. to try to find another way.

Not a release cycle length problem

Posted May 3, 2007 18:38 UTC (Thu) by davecb (subscriber, #1574) [Link]

I think we're in violent agreement (;-))

I noticed people break into Linux kernel hacking via the kernel janitors effort, and into at least one non-Linux kernel team by fixing bugs and escalations, so I speculated one might invite people to look at regressions as a good way of learning the newest code to go into the kernel.

--dave

Not a release cycle length problem

Posted May 3, 2007 19:14 UTC (Thu) by nim-nim (subscriber, #34454) [Link]

> The reason the Linux kernel can't use this well-worn strategy for
> eliminating bugs is that the special economics of open source and
> community development don't provide a way to get people to do all that
> boring alpha testing

It seems Adrian complains the alpha testing part is done. What's not done is exploiting all the testing reports.

Not a release cycle length problem

Posted May 3, 2007 22:05 UTC (Thu) by giraffedata (subscriber, #1954) [Link]

The reason the Linux kernel can't use this well-worn strategy for eliminating bugs is that the special economics of open source and community development don't provide a way to get people to do all that boring alpha testing
It seems Adrian complains the alpha testing part is done. What's not done is exploiting all the testing reports.

Chopping the quote where you do, it looks like you're disagreeing. The end of that sentence is "and debugging." I assume it's the debugging part that nobody is signing up for.

I also strongly suspect that in the cases that concern Adrian, the testing done was beta testing, which engineers don't find nearly so objectionable. Beta testing is where you fire up the new code and try to use it for real work. Alpha testing is where you simulate using the code, for no gain other than flushing out bugs in it. That's what's boring enough that engineers seem to have to be paid to do it.

Not a release cycle length problem

Posted May 3, 2007 18:55 UTC (Thu) by iabervon (subscriber, #722) [Link]

The problem, as I see it, was that things got to the point where the entire community was waiting for no more than 14 developers to debug things that nobody else had the appropriate knowledge to work on. (And some of these developers were waiting for bug reporters to get back with more information, too.) And adding more people to debugging a small number of regressions would just slow down the process (cf. The Mythical Man-Month). The only application of additional developer effort I could see actually being helpful at the point when 2.6.21 was released would be if Alan Cox could be convinced to fix serial port drivers (since he's got experience in that area he's not using).

I think, actually, that 2.6.22 wasn't forked soon enough; I think that the delay of the merge window meant that too much code was sitting in development without testing, leading to a drop in the quality of code appearing in -mm (which Andrew commented on for the 2.6.22 merge plans). On the other hand, I don't think the code tagged as 2.6.21 deserved the penguin pee. The real issue is that 2.6.x isn't released according to a release engineering process which creates stable results. Between the last -rc and 2.6.21, non-obvious patches were merged that added regressions. If this sort of thing happens, having more -rcs and spending more time couldn't possibly help. The solution, I think, would be for Linus to release even earlier, but have each series go through a period run by the -stable team with -stable rules before a final release that gets a penguin-pee version number.

Not a release cycle length problem

Posted May 3, 2007 19:21 UTC (Thu) by nim-nim (subscriber, #34454) [Link]

I think Adrian would have been happy if the various subsystem maintainers had made sure someone looked at each bug (or notified him some would not be processed). Reading his messages it seems no one even bothered with this. I'm pretty sure that does not qualify as "waiting for".

A tale of two release cycles

Posted May 4, 2007 18:28 UTC (Fri) by JohnNilsson (guest, #41242) [Link]

Some time ago I suggested a merge queue model. Here's a variant on that theme.

If a regression is found, always remove the code that introduced it, and refuse to merge it again until the regression is gone. The basic principle being that this would create an incentive to fix the regression if people want the code merged.

Copyright © 2007, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds