The Linux Kernel's Fuzzy Future (InformationWeek)
"IBM's Frye sees no reason for the Linux camp to produce its own road map, arguing it's better to keep customers focused on 'what's there today.' Besides, he says, CIOs can get closed-door briefings from Linux distributors if necessary. Yet, his explanation seems a bit like a rationalization for a community-oriented development process that simply hasn't gotten around to centralized, long-term planning."
Posted Dec 6, 2004 15:11 UTC (Mon)
by cpm (guest, #3554)
[Link] (2 responses)
The good folks doing this marvelous job of developing the Linux kernel are not participating in the age-old "don't buy my competitor's product today, because the product I am going to introduce in the future will blow it away" hype and garbage that essentially constitutes the public vapourware sales campaign known as the Microsoft roadmap.
And this is a bad thing?
Posted Dec 6, 2004 15:28 UTC (Mon)
by sbergman27 (guest, #10767)
[Link] (1 responses)
Posted Dec 6, 2004 23:59 UTC (Mon)
by Duncan (guest, #6647)
[Link]
Posted Dec 6, 2004 15:26 UTC (Mon)
by gowen (guest, #23914)
[Link]
Posted Dec 6, 2004 15:43 UTC (Mon)
by MathFox (guest, #6104)
[Link] (1 responses)
So, what happens in practice: There are dozens of partial roadmaps (wanted features, structural redesign plans). In various discussions, things are decided. Through contributions of dozens of parties, the kernel grows in the general direction that the users want.
Posted Dec 6, 2004 17:40 UTC (Mon)
by southey (guest, #9466)
[Link]
Posted Dec 6, 2004 15:48 UTC (Mon)
by epeeist (guest, #1743)
[Link]
What he misses is that it is all very well to have a three year plan, but you have also got to deliver on it. How many features have been dropped from Longhorn so that it can be delivered when it is scheduled? How much QA will have been skipped to ensure that it is delivered on time? How does a plan you don't keep help your customers? In many respects the reason for a three year plan is to keep end customers locked in, expecting jam tomorrow.
Posted Dec 6, 2004 15:51 UTC (Mon)
by cr (guest, #3685)
[Link] (1 responses)
Posted Dec 6, 2004 20:29 UTC (Mon)
by josh_stern (guest, #4868)
[Link]
Posted Dec 6, 2004 16:38 UTC (Mon)
by donstuart (guest, #4550)
[Link]
By the way, does anyone else wonder about his name?
Don
Posted Dec 6, 2004 17:02 UTC (Mon)
by russelst (guest, #24599)
[Link]
An example of this that shows both sides of the same problem is that since Linux didn't have a concrete roadmap, large parts of the community were able to get behind Xen fairly quickly over UML. The momentum coming out of OLS for Xen was quite impressive. Not that UML is dead, or Xen is totally ready, but there was a strategic shift there that I have a hard time seeing in a large corporate software development shop. The engineer who had suggested UML to management, which in turn promised customers 'UML virtualization', would have a hard time shifting as quickly as an unstructured market with shared responsibility. Case in point: SuSE is likely to have to spend substantial effort supporting and improving UML while other distros have the option of supporting Xen (or any better option that comes along).
Posted Dec 6, 2004 17:07 UTC (Mon)
by andyo (guest, #30)
[Link] (1 responses)
Posted Dec 6, 2004 17:29 UTC (Mon)
by corbet (editor, #1)
[Link]
Posted Dec 6, 2004 17:10 UTC (Mon)
by copsewood (subscriber, #199)
[Link] (2 responses)
LOL.
Posted Dec 6, 2004 17:55 UTC (Mon)
by emkey (guest, #144)
[Link]
In any case your point still stands.
The computer industry is so volatile that I don't realistically think that anyone can make promises more than a year or so out, and even that is risky.
Posted Dec 6, 2004 22:54 UTC (Mon)
by bk (guest, #25617)
[Link]
Posted Dec 6, 2004 17:57 UTC (Mon)
by ballombe (subscriber, #9523)
[Link]
"Without stable, reference milestones it will artificially increase the barrier to moving beyond dependence on a single vendor," one observer wrote in July in an online forum on LWN.net, a Web site focused on Linux happenings. "Innovation outside the corporate sponsored environment dries up ... Goodbye, Linux."
Posted Dec 6, 2004 18:29 UTC (Mon)
by iabervon (subscriber, #722)
[Link] (8 responses)
The real issue is that it doesn't generally take Linux development three years to get from an idea being sufficiently thought-out that it could go on a roadmap to having it done, at least for features with any publicity. A lot of features come from people realizing that something useful is now really easy, due to other work.
On the other hand, it might be useful to have a running list of things that people have announced that they are working on.
As for the fears about the changes in the development process, they run entirely contrary to the actual evidence. Assuming 2.6.10 comes out in about a week, there will have been 11 major releases of 2.6 (counting 2.6.8.1) in 12 months. For 2.4, under the old process, in the first year, there were 17 releases. 2.6 has only recently completed the period, common to all stable series, of having monthly releases, so it is too soon to say anything about the regular rate of releases, although it looks like it is probably not going to be more than twice as fast as 2.4 was once 2.4 settled down (2.4 had a long fast period, then a couple of slow periods, and recently has gotten more consistent).
There has been a concern that the 2.6 series wouldn't include kernels that end users could actually use, and that they would have to use distribution-provided kernels instead. However, in 2.4, no major distribution provided an unmodified kernel, or even close, because they were all backporting changes they felt necessary from the 2.5/2.6 tree, which meant that it was often impossible to go from a distribution kernel to a kernel.org kernel and have the system continue to work correctly. For example, one 2.4 kernel I've used is linux-2.4.19-rmk7-pxa1-cerf1; this is at least 9 versions added onto the official 2.4 kernel to get it to work, and it doesn't include bugfixes in the past 9 2.4 kernels. The 2.6 kernel for the board is 2.6.7-cerfb1, with one set of patches and tracking the official kernel pretty well (2.6.8.1 was the newest kernel at the time).
The main points of the new process are really to minimize the differences between the kernels that distributions were shipping and the kernels that the developers were releasing, and to avoid the periods at the start of a stable series where versions come out monthly and don't work very well. A better way of thinking about the new process is that, in the old way, there was a lot of stuff that got backported by distros in divergent ways; in the new process, the kernel team does the backports so they stay consistent. The development series is kept similar, such that bugfixes can be forward-ported to the development kernels as well, and things don't regress going into the next major version due to fixes not propagating. From the standpoint of terminology, it makes sense to keep the version numbers comparable, and the version numbers matter most to the stable kernels, so those numbers are used.
The only issue I see, currently, is that the new process loses the fixes to the version numbering made in the 2.4 series, where -pre versions were known not to be ready, -rc versions could really become final at any point, and final versions were really quite reliable. I still think it's very odd that Andrew Morton is more in charge of the development series and Linus is more in charge of the stable series.
Posted Dec 6, 2004 18:42 UTC (Mon)
by allesfresser (guest, #216)
[Link] (7 responses)
Am I just extremely lucky or is there something I'm missing?
Posted Dec 6, 2004 18:57 UTC (Mon)
by emkey (guest, #144)
[Link]
In theory vendor kernels should be better since they likely do more formalized testing. In practice I doubt there is a huge amount of difference.
Posted Dec 6, 2004 19:04 UTC (Mon)
by maceto (guest, #16498)
[Link] (1 responses)
Posted Dec 6, 2004 21:49 UTC (Mon)
by iabervon (subscriber, #722)
[Link]
Posted Dec 6, 2004 19:16 UTC (Mon)
by iabervon (subscriber, #722)
[Link]
I believe the reason to use a distro kernel (back when it mattered) was that development kernels were frequently very broken, but stable kernels didn't support new stuff. It's plausible that, when Red Hat released their first 2.4 kernel which supported NPTL, that was a more stable kernel than the 2.5 kernel which was current at the time. So the argument is really that there are some kernel.org kernels which don't run (very true, especially in early 2.5), and that some distro customers want features not in any working kernel.org kernel at some point in time.
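As a side note, one can check which thread implementation a given glibc/kernel combination actually provides. A minimal sketch using glibc's confstr() extension (glibc-specific, so treat it as illustrative rather than portable):

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[128];

    /* _CS_GNU_LIBPTHREAD_VERSION is a glibc extension; it reports,
     * e.g., "NPTL 2.3.4" on an NPTL-enabled setup, or something
     * like "linuxthreads-0.10" on an older LinuxThreads combination. */
    if (confstr(_CS_GNU_LIBPTHREAD_VERSION, buf, sizeof(buf)) > 0)
        printf("thread library: %s\n", buf);
    else
        printf("no thread library version reported\n");
    return 0;
}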
Posted Dec 7, 2004 1:59 UTC (Tue)
by error27 (subscriber, #8346)
[Link]
I've seen tons of problems where people upgraded to a non-RH kernel. For example, StorView broke because Apache broke, because it was compiled specifically for one kernel. Or RPM stops working unless you find out on Google that you need to export a shell variable. Or stuff uses features that are in the RH 2.4.18 kernel but are not included in the 2.4.20 stock kernel.
The point is that if you move to a stock kernel you might be downgrading even if the number is higher. Downgrading is a pain. If you downgrade from an SE kernel to a kernel from half a year ago then you get fs corruption.
Posted Dec 7, 2004 6:13 UTC (Tue)
by wolfrider (guest, #3105)
[Link]
(Granted, it's been a few years since I even *touched* a RH box - but when they released a CD [~1998? 1999?] where you couldn't even **recompile the kernel** successfully with the **default software** after installation, I swore off RH forever... And switched to SuSE 7.3.)
--I've had good luck with SuSE, and even better luck with Debian (Knoppix.)
Posted Dec 7, 2004 19:04 UTC (Tue)
by dlang (guest, #313)
[Link]
If the vendor kernel is 2.4 plus 200 patches implementing additional features, then the toolset included with that kernel may be somewhere in between what 2.4 and 2.6 need, and may not work with either a stock 2.4 kernel or a 2.6 kernel.
Red Hat has historically been the worst-case example of this (or at least the most visible), and preventing that is one reason why the 2.6 development is happening the way it is.
Posted Dec 6, 2004 20:13 UTC (Mon)
by Baylink (guest, #755)
[Link]
This reminds me of something
a client told me once.
He was a salesman (and later president) of an Allied Van Lines agent here in Florida. When he was just a young turk salesman, one of the older guys gave him a great selling idea: when talking to the potential customer, work into the conversation, just off hand, that "a good salesman would never smoke in your house" (this was the 60's :-).
That is, put the other salesmen at a disadvantage they don't even know they have, by inventing some cockamamie criterion that they don't know they're being judged on.
That's what's happening here, too.
"We have no Five Year Plan. We must be Bad!"
Nuh uh.
Cause, I mean, c'mon: go look at the companies in the Unix market that *do* have a Five Year Plan. They're in, by and large, about as good shape as *another* organization notable for Five Year Plans.
Oh, wait: it's pretty much only SCO (and they're a fucked company walking), and Sun (and they're "open sourcing"). Oh, and AIX, and IBM's given us most of the good parts.
Got it.
Yeah; let's put on the Jack Welch hat, and do some strategic planning.
Cause that's workin' real well.
(This message brought to you by the "So many things are just me" radio network.)
Posted Dec 6, 2004 20:21 UTC (Mon)
by Max.Hyre (subscriber, #1054)
[Link]
[Copy of reply I posted on their site.]
Dear Mr. Foley:
I'm a bit puzzled by your article. When Microsoft supplies a
``roadmap'', it sounds like you're getting a chart of an area with all
the possible paths laid out for you to choose. In actuality, so far
as I can see, such a roadmap is more a travelogue describing the only
way you can go. Yes, it's nice to plan for what MS will offer next,
but there's not a lot of choice involved, especially if what you need
is missing.
Linux, on the other hand, has no roadmap because its advances are
unplanned---they occur as people improve on extant capabilities, and
others add what they need. Who could have planned IBM's addition of
JFS, or NUMA, or RCU three years before each was released? That
early, it was unclear whether IBM even planned to play nicely.
Under Linux, on the other hand, useful additions are rapid. If the
current kernel meets your specifications, why care about a roadmap?
If it doesn't, check whether anyone's working on the required
function. If so, send resources. If not, hire a kernel hacker or two
and add it now, not when a roadmap shows it's planned for. If the mod
is accepted into the mainline kernel, you're in clover. Even if it
isn't, you've still got the kernel you need, and if it's writeable
as a module, it won't be too hard to keep it current with newer kernel
releases.
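(As an aside: the skeleton of such an out-of-tree module is tiny. A minimal sketch for a 2.6-era kernel might look like the following; the "myfeature" name and log messages are invented for illustration, and a real feature would hook into the relevant subsystem rather than just logging.)

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Sketch of a feature maintained outside the mainline tree");

/* Called when the module is loaded, e.g. via insmod or modprobe. */
static int __init myfeature_init(void)
{
    printk(KERN_INFO "myfeature: loaded\n");
    return 0; /* a nonzero return would abort loading */
}

/* Called when the module is removed via rmmod. */
static void __exit myfeature_exit(void)
{
    printk(KERN_INFO "myfeature: unloaded\n");
}

module_init(myfeature_init);
module_exit(myfeature_exit);

With 2.6's kbuild, a module like this can be built outside the kernel tree with a one-line makefile (obj-m += myfeature.o) and a make invocation pointed at the configured kernel source, which is what makes tracking newer releases tolerable.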
So, what would a roadmap (assuming anyone could define and enforce
one) do for you? It's another unnecessary evil that has been
eliminated from Linux.
Best wishes,
Max Hyre
Posted Dec 6, 2004 21:10 UTC (Mon)
by rjamestaylor (guest, #339)
[Link]
Posted Dec 6, 2004 23:41 UTC (Mon)
by glenalec (guest, #26113)
[Link]
Posted Dec 7, 2004 0:07 UTC (Tue)
by Duncan (guest, #6647)
[Link]
Posted Dec 7, 2004 1:16 UTC (Tue)
by Nelson (subscriber, #21712)
[Link] (1 responses)
If CIOs are focused on the Linux kernel release and development schedule, then those companies probably have other issues. It would be far better to list some kernel technologies that are missing and mission-critical to their businesses.
Posted Dec 9, 2004 17:59 UTC (Thu)
by sepreece (guest, #19270)
[Link]
Those of us who build products (or systems) typically aren't interested in being OS houses, too. We generally want to spend our resources on building our domain functionality and want to be able to treat the OS as a platform that we don't have to worry about.
For us, building consumer products on a non-X86 architecture, this means that we get our Linux from a distributor that does architecture and feature tailoring for us, and if we need specific features we may work with a third-party to develop them, under whatever license is appropriate.
This isn't a complaint, it's just a description of [our perception of] the way OSS development works. And it's fair to say that it is a barrier to Linux adoption for at least some potential adopters. I think that was the article's point - that the lack of plans for extending Linux in specific directions and the lack of a stabilization approach in the new development model are a problem.
There are benefits that we balance that problem against, and, so far, we believe that the benefits outweigh the problems (or the cost of working around the problem) for us, but that doesn't mean that it isn't a problem.
Posted Dec 7, 2004 2:28 UTC (Tue)
by dkite (guest, #4577)
[Link]
Posted Dec 7, 2004 7:47 UTC (Tue)
by job (guest, #670)
[Link] (1 responses)
Posted Dec 8, 2004 6:41 UTC (Wed)
by Duncan (guest, #6647)
[Link]
Posted Dec 7, 2004 12:02 UTC (Tue)
by ekj (guest, #1524)
[Link]
In short, he is complaining about a non-problem. There *are* real problems that need to be solved to improve Linux and free software in general, but a 3 year plan clearly isn't the answer.
Posted Dec 7, 2004 20:28 UTC (Tue)
by ciunas (guest, #26500)
[Link]
The article's author is making the same mistake as many journalists - confusing the kernel with the distribution.
Sure a proprietary OS vendor can trumpet 5 year plans and can discuss the details of the next Great Leap Forward. The issue is that these tend to list OS features, some of which required kernel enhancements.
Similarly, features that get contributed to the Linux kernel tend to end up there because they enable userland features that will keep the user happy - be that user Aunt Tilly or a Google PhD.
So the same applies to a Linux distro, Sun and Microsoft. Of course the GNU/Linux development process is very different to the other two examples. The point is that for a fair comparison you must compare the development planning process for Microsoft Windows with that for the GNU/Linux system as a whole - not just the kernel.
For an attempt at a proper comparison the writer should have talked to people like leading distributions, to major application developers in a position to influence kernel and system development (Oracle etc.), to the average hacker with itchy fingers.
The planning committee for Microsoft consults market research and makes great plans that the marketing department love. The path that Linux takes is demand driven - not marketing driven. It is determined by a heterogeneous bunch, but in the end it provides what the people who really use the system really need. And with a lot fewer broken promises.
Posted Dec 8, 2004 0:46 UTC (Wed)
by dankohn (guest, #6006)
[Link] (1 responses)
I would love an article every 6 months or so where LWN presents Linux's 1 to 3 year plan. Of course, to be comparable to the Windows roadmap, you'd want to include all the related software that regular people think of as part of the OS. So, you'd report not just on improving pre-emption and Infiniband support in the kernel, but summarize the roadmaps for X.org, Gnome, KDE, Firefox, yum, OpenOffice, etc. And, you'd want to spend special time talking about projects like Xen and Reiserfs that will bring powerful new features when and if they're incorporated.
And, what about the fact that the roadmap would certainly be wrong? So what? It would likely be just as accurate as Microsoft's, and LWN's opinions of where things are going are just as valid and relevant as Linus's, Stallman's, or anyone else's.
In fact, I think this kind of roadmap could be great PR for LWN, where you could issue a press release every 6 months detailing the major advances and changes in the 3 year plan. This would likely be picked up by the rest of the tech press (and sometimes even the NYT and WSJ), and bring LWN a lot of new subscribers. (Note that the dumb InformationWeek article prompting the idea already references LWN.)
The roadmap itself could largely consist of links to old articles on each area and to the external roadmaps that the individual projects publish. So, there probably would not even be that much work involved. But, the LWN roadmap could be a central repository of links to cool stuff coming down the line.
Posted Dec 8, 2004 6:53 UTC (Wed)
by Duncan (guest, #6647)
[Link]
Linux Kernel Vapourware?
Umm, lessee here.
Throughout the article, the author makes a tacit assumption that the road map that MS publishes is somehow accurate. When is the last time that a major product came out of the company on time and containing the features that they said it would three years previously?
No kidding. How much of the former Longhorn has been removed (so much so
it's shorthorn now <g>, but that's not an original observation), and how
far has the time slid? What about other major changes? Was Chicago on
time? Wouldn't it have been MSWormOS 93 if it had been? What about the
roadmap saying they were going to unite the 32-bit kernels with what
became MSWormOS 98? That slid past that, past 98SE, past ME and 2K, and
finally happened with the privacy challenged eXPrivacy. (I'd been long
awaiting that change, but was bitterly disappointed when it ended up
coming with invasive technology I simply could not and would not accept.
All the better for me tho, in the end, as it finally pushed me into
defecting to the free software world, something I may have never done on
my own. And no, I never intend to go back, either, except possibly if its
government changes so it becomes a part of the free software world too.)
If we wished, I suppose Linux could have a map of a bunch of stuff that
WON'T be available in three years' time, too. Only we'd have to make it
really ridiculous (OK, I don't think Linux will be handling devices on
Alpha Centauri in three years, since even at light speed, we couldn't get
there by then <g>), because as fast as it develops, we might end up having
the features in spite of ourselves! <g> Anyway, what sort of good would
such a map of stuff that /won't/ be available do?
Duncan
The Linux Kernel's Fuzzy Future (InformationWeek)
That approach stands in contrast to the one taken by Bill Gates' team at Microsoft, where Windows releases aren't only planned, but publicly disclosed, for the next three years.
And, every once in a while, those release dates and promised new features are delivered upon.
No-one knows what Linux will be like in 3 years. No-one knows what Windows will be like in 3 years. The only difference is that commercial vendors are obliged to pretend that they do.
If there were a roadmap for Linux, who could enforce it?
Linus and the OSDL can only exert pressure as far as their (short) financial arm reaches. Linus can have wild plans for the kernel, but if he doesn't get enthusiasm from the developer group, nothing will happen.
Sorry, this is the process. And you can influence the result; try that with Microsoft's roadmap.
What, world domination is not sufficient as a roadmap? :-)
One of the reasons that it is difficult to produce a three year plan must be the dependence on future hardware technology. In the PC world this is to a large extent dependent on what Microsoft and the big manufacturers declare they are going to do at WinHEC.
Who's complaining here?
From my reading, this article is voiced by the journalist, saying, "Why can't you bazaar guys be more like the cathedral guys?" To which the obvious answer is, "Because their method doesn't WORK (for us)!" Methinks he has confused engineering and marketing, a common problem for those who report on Microsoft.
Yes, but as other posters have pointed out, that particular method doesn't
really work for them either. The best that their "roadmap" can be taken
to mean is that such and such a feature is getting serious consideration
for inclusion in upcoming releases. The comparable thing for Linux,
which might be interesting to some but non-essential for everybody, would
be to have a kind of "department" entry for the Linux kernel describing
who the major developers have been in the past and what projects they are
currently working on or considering working on in the upcoming future.
That would completely capture all the value of a "road map" and then some,
since it could help a company figure out if it needed to pay someone
to work on a Linux feature they want.
He seems to think a 2.odd development kernel would be a suitable roadmap. Apparently, one way to meet his desire for "predictability" is to put completed improvements on the shelf for a year or two so we can talk about them.
There's an interesting assumption here that having concrete long-range goals is good. It's one of the challenges that both large corporations and large software development shops have to deal with. When you create artificial critical events, achieving the event often becomes more important than achieving the business benefit.
Not every vendor's kernel is the same
The kernel as released on kernel.org could look a lot different from a kernel installed from a distribution, too. It was hard to predict that Red Hat would release a 2.4 kernel with 2.6 features incorporated. And I'm not criticizing them for doing that, just saying it muddies predictions.
One of the core ideas behind the current development model was to minimize the divergence between the mainline kernel and what the distributors ship. Getting features into the mainline reduces the temptation to backport them to older kernels.
Once upon a time on a far distant planet within the galaxy ...
There was a Soviet Union which, as every good socialist knew, was going to conquer the universe because of the inherent technical advantages of its 3 year industrial production plans.
Weren't they five year plans?
How about 4 year presidential campaign economic plans?
Do not miss the best part:
It's a bit surreal to get to the point where he admits that Microsoft's three-year roadmaps aren't accurate, and have him claim that having additional inaccurate information would still be useful. In that case, shouldn't he just make up a roadmap for Linux?
I've been puzzled for a long time (and this is no intended slight to you, iabervon, just a quizzical observation) that many people say it's impossible to run with a vanilla kernel.org kernel. I've *never* run a distribution-provided kernel--always a kernel.org variety--and have never had any problems with stability or dysfunctionality. I'm currently running vanilla 2.6.9 on all three of my machines at home and one at work, and they run like a charm. There's a mix of old and new hardware (Epia 600, Athlon 600, P4 2.6 and P4 1.0) and it doesn't seem to make any difference--the kernel just chugs right along. I'm running Slackware on all of them. I've run things this way since around 1.3.57, I believe... so I've had a while to observe... :-)
I've had fairly good luck with the generic kernel in the past. And in some cases I've encountered bugs in vendor kernels that didn't exist in the generic kernel.
hehe, you might have a point :-) it has something to do with the user, but I have to say there have been issues with the 2.6 series. And a more tested kernel (debian/redhat/suse) is more TESTED than a vanilla one and might have some bugs fixed. However, the trouble tends to be in new/experimental stuff in the kernel and "non-essential/-server stuff" that one can (yes, it's possible!) leave out...
Now that there's better communication and flow of patches into the kernel, the bugs fixed in a distro kernel will probably also be fixed in the vanilla kernel from the same time (which will be a later version than the basis of the distro kernel). The issue is the rate at which bugs affecting features already in the kernel are introduced. It has seemed that almost all of the regressions have been caught quickly, were necessary evils to fix a more serious problem quickly, or involved uncovering BIOS bugs.
I've actually personally run only kernel.org kernels on my computers, although I've run distro kernels on all the computers that weren't my own. My experience is that kernel.org kernels rarely have problems, and I've never had any problem I couldn't solve by getting the latest kernel.org kernel. I actually ran 2.4.0 and 2.6.0 for at least a few months when each of these came out.
Slackware and Debian etc. use more stock kernels than Red Hat and SuSE.
--The only time I've seen problems when attempting to run a "vanilla" kernel.org kernel on a "Linux system" is with the Red Hat distro.
the issue is that major standard kernel version changes usually require tool updates (2.4 to 2.6 involves changes to /proc, modules, etc.; earlier versions had more changes)
Why the convolutions?
Sounds like a stretched argument to make a (sponsored?) point that Microsoft's development is *actually* more open than Linux's. Another howler along the lines of Microsoft's TCO win over Linux.
The article seems to use the word 'centralised' an awful lot! That word
always comes across with a connotation that is somewhat icky to me, but I
guess power-hoarding sociopaths (aka: high-level suits) are turned on by
it! ;-)
Isn't Linus on record as saying that one of the reasons Linux advances as
fast as it does is because it /doesn't/ have such structure? I seem to
remember reading a statement to that effect, that such "roadmap" based
work would have shoehorned Linux into something much different, and far
more limited, than what it was even with 2.0, let alone a modern kernel.
As others have said, there'd have been no way to predict three years back,
what we'd have today, and that has /always/ been the case with Linux. A
six-month to one year "projection" might arguably be possible, and indeed
there was such a thing pre-2.6, but get much farther out than that, and
it's really not possible. Even a year would be stretching it beyond all
practicality, in general, altho it might be possible for individual areas
of work.
Duncan
Why does that matter? Maybe they should list the missing things that they need and the reasons why; if they are compelling, they will get added, probably within 3 years.
We've been told, by people with enough OSS credibility that they should know, that the OSS community generally doesn't want input on what they should be building, it just wants code. That is, "Don't tell us what you want, build it and give it to us and, if it looks to us to be good enough and if we think enough other people might want it, maybe we'll put it in the mainstream."
I'll answer in the classic free software way:
Where is your patch?
If you want a 5 year plan, then write one and fund it.
In other words, if the feature is so valuable, you or someone will do it. If it isn't
valuable, it won't happen.
Derek
Linux as a market
In many ways the Linux kernel more resembles a market, with many internal
vendors competing, than a product in the traditional sense. This article
seems not to have understood this yet. You cannot describe one road map
for a market. Who would have the authority to create one? That doesn't
mean there aren't any; each vendor probably has one for their respective
product. This should be welcome news for journalists who can highlight
general directions.
Hmm... That's a unique but interesting metaphor.
Of course, sticking with that metaphor, the reason it hasn't been done yet
is that anyone sufficiently competent at market prediction to predict a
complex market like that three years out, isn't likely to stick around
writing Linux market update reports... <g> They'll be too busy trying to
figure out how to spend the billions they've made on the stock market. <g>
Duncan
Incredibly Clueless
This article is incredibly clueless. Such articles used to be pretty common -- around the time LWN was started, when basically no journalist understood what we were doing. Lately there has been more clueful reporting, but as demonstrated here, some journalists are still clinging to the stone age.
LWN should present Linux's 3 year plan
I think the LWN editors are uniquely well positioned to publish a 3 year Linux roadmap that, while just your point of view, could be a baseline that everyone references (if just to disagree).
LWN should present Linux's 3 year plan
Well. LWN /does/ present an annual predictions for the coming year (on
Linux and the open source community in general) article, with an analysis
of how well last year's predictions did. That could be thought of as a
one-year roadmap of sorts. I don't think anybody that knows the territory
is brave enough to make even semi-serious projections out much further
than that.
I suppose LWN could update it every six months, perhaps every quarter, and
post a link to that to generate PR.
... Which reminds me... it's getting that time of year again. 2005's
round of annual predictions and look-back articles should be already being
planned, and for actual paper magazines, they're likely already in mock-up
if not at the press, by now. It'll be interesting to see how LWN did and
what it predicts for next year. Maybe I'll have to go find the 2004
article and reread it, just to get a clue where LWN will be going with the
next one. <g>
Duncan