
Avoiding the tar pit

This Washington Post article is one of many expressing disappointment with Microsoft's Vista release, which is famously late and which has failed to live up to Microsoft's early promises. The article claims that the problems are not specific to Microsoft:

The sad truth is that Microsoft's woes aren't unusual in this industry. Large-scale software projects are perennially beset by dashed hopes and bedeviling delays. They are as much a tar pit today as they were 30 years ago, when a former IBM program manager named Frederick P. Brooks Jr. applied that image to them in his classic diagnosis of the programming field's troubles, "The Mythical Man-Month."

In this context, it behooves us to ask: is there a free software tar pit in our future? What can we do to avoid a grim future where we bog down, our software collapsing under its own weight?

Looking at the state of the free software community now, it is tempting to say that, so far, we have nicely avoided the tar pit. But have we? Here are a few dates from the past which may be of interest:

  • The 2.2.0 kernel was released on January 26, 1999.
  • 2.4.0 came out on January 4, 2001.
  • 2.5.1 - the beginning of the next development series - was released on December 16, 2001.

The 2.5 development series was stalled for almost one full year while 2.4 reached a state which actually approached stable. Overall, the process from 2.2.0 to a stable 2.4 took almost three years; the kernel was in a "feature freeze" state for about two of those years. This was a time when quite a few people - many of them kernel developers - felt let down by the development process. This, your editor would attest, was a tar pit era.

One might well argue that the kernel has not yet escaped that tar pit. Like Vista, we lack a shiny new next-generation filesystem; the only credible attempt at such a filesystem (reiser4) remains in a stalled, feature-reduced state. It seems likely, however, that most observers would agree that the tar pit has been left far behind. The kernel development process has been humming along at a high pace, delivering interesting new releases every few months. The same story can be seen in many other parts of the free software community.

If we accept that things have gotten better, it can be interesting to look at why. One hint can be found in the same article:

Without that discipline, too often, software teams get lost in what are known in the field as "boil-the-ocean" projects -- vast schemes to improve everything at once. That can be inspiring, but in the end we might prefer that they hunker down and make incremental improvements to rescue us from bugs and viruses and make our computers easier to use. Idealistic software developers love to dream about world-changing innovations; meanwhile, we wait and wait for all the potholes to be fixed.

Any successful free software project must get good at fixing potholes; in the worst case, users (and distributors) will do the job for themselves. In a well-managed project, the people who are trying to improve the whole world will not get in the way of the pothole fixers. There is no single team, charged with all the development on a project, which can get bogged down in that way.

A "well-managed project" must find a way to keep whole-world improvements from stopping everything else, however. The older, multi-year kernel process did not always succeed on that front; the attempt to improve the entire kernel ended up bogging down the entire process. The new kernel development model, with its short release cycles, has caused some developers to complain that it is no longer possible to make major changes that require a long time to settle down. To the extent that this complaint is true, it should maybe be seen as a good thing. By only merging changes which can be brought to a releasable state within a month or two, the new process sidesteps the tar pit and keeps the development machine running.

One of the key suggestions in The Mythical Man-Month is the formation of "surgical teams" to support the lead programmer(s). Some of the team members - such as the clerk who "keys in" the code - seem a little quaint now. But the idea that the people running the project (or parts of it) need lieutenants, documentation writers, tool makers, etc. still makes a lot of sense. Once upon a time, the kernel lacked much of that structure, with everything concentrated on a single developer - Linus Torvalds. Now there is a vast network of lieutenants. Quite a few developers focus their effort not on the kernel, but on the tools used by kernel developers. All that's missing are the clerks - and, perhaps, the documentation writers.

One of the biggest anti-tar pit technologies used by the free software community would have been hard for Mr. Brooks to imagine back in 1972: multiple, independent development teams. Any project of any size has a wide range of independent, sometimes conflicting development efforts happening at the same time. If one group bogs down, the others continue unhindered. The process may seem inefficient, given that a significant portion of the work which is done may never survive to a stable release. Throwing away code can be painful, but it is far less so than throwing away the entire project.

Peer review is also missing from the Brooks landscape. But peer review helps to ensure one of the things he thought was vital for a successful project: a clear conceptual architecture for the project. That architecture may take a surprising form: few free software projects have the sort of extensive design documentation that he probably had in mind. But a crowd of reviewers can help to ensure that new code is consistent with the principles behind a project - and that it is maintainable into the future. In this context, it is notable (and worrisome) that an increasing number of proposed kernel features are finding themselves stalled by a lack of reviews.

Finally, one should note that free software projects have mostly learned a sure-fire way to avoid a failure to live up to their promises: they don't make any. Vaporware tends to be scarce in this community; either the code exists or it does not. Very few projects are truly controlled by one corporation, so companies are also restrained in the promises they make about future releases; they are in no position to ensure that those promises are fulfilled. The relative freedom from marketing-driven promises helps free software projects avoid disappointments - and it also helps them to focus effort on objectives with a reasonable chance of success.

To argue that the free software community is immune to the problems of large-scale software development would be foolish. For all their growth, many or most components of a system like Linux are still a fraction of the size of their equivalents on certain proprietary systems. As our code base grows, there will undoubtedly be new challenges for those who would continue to develop it. But the free systems we have today must certainly far exceed the size of System/360 when Mr. Brooks was managing it, and we would appear to be going strong. With widespread community participation, improving tools, and the willingness to change our development models in response to real-world problems, we should be able to stay out of that tar pit for some time yet.



Avoiding the tar pit

Posted Feb 15, 2007 2:25 UTC (Thu) by gallir (guest, #5735) [Link] (11 responses)

> many or most components of a system like Linux are still a
> fraction of the size of their equivalents on certain
> proprietary systems.

Hmm... you are just too kernel-focused; a working but still basic GNU/Linux
system is much more than that. If you compare its size to "some proprietary"
system like Windows, you should also add Xorg, KDE/Gnome, glibc, several
shells and scripting languages, linux-utils, usb-utils, initd+sysv, Samba,
and so on.

The above just confirms what you said before: independent groups. But the
aggregated system is comparable to any relevant and well-known proprietary
integrated "product".

Avoiding the tar pit

Posted Feb 15, 2007 5:08 UTC (Thu) by flewellyn (subscriber, #5047) [Link] (3 responses)

True, a fully functional GNU/Linux is far more than the kernel. Like any OS, it has its userspace,
both system libraries and applications.

But where free software has a major advantage over proprietary offerings, in this case, is that the
components can be delivered piecemeal. With the exception of a few core components that do
need to synchronize somewhat (I'm thinking of the Linux kernel and glibc, as well as some
userspace tools for Linux kernel functionality), most of the programs on a GNU/Linux system can
develop at their own pace. Selecting the versions to use, and integrating the various
components into a single, coherent system, is the job of the distributors.

I think this decoupling of the development from the packaging of systems and applications is a
huge step forward. In a proprietary system where all (or the lion's share) of the system software
and apps are developed in house, a single program running into problems can delay the whole
release. This isn't so much the case with free software: if a distro is going to ship with Linux
kernel 2.6.19 instead of 2.6.20, so be it. Upgrades can come later as needed.

Avoiding the tar pit

Posted Feb 15, 2007 8:18 UTC (Thu) by tnoo (subscriber, #20427) [Link]

My analogy for this is a big building. Every part of the building is
built by the respective specialists that usually belong to independent
companies. If one of these companies fails, or if their quality is known
to be inferior, you choose a competitor. The same applies for the
building materials: there is a vast choice from many independent sources
that can be selected dependent on quality, price and taste.

The architect's or engineer's job is to select and assemble all the
different pieces in a meaningful way to make a whole, usable building -- in
the free software world, this is the distribution.

It is obvious that the only way to deal with complexity is to break a system
down into small, interchangeable pieces. Monolithic architectures
(like an all-in-one operating system) are bound to fail.

Avoiding the tar pit -- and U.S.P.T.O.

Posted Feb 15, 2007 11:14 UTC (Thu) by grouch (guest, #27289) [Link] (1 responses)

> But where free software has a major advantage over proprietary offerings, in this case, is that the components can be delivered piecemeal.

Your friendly neighborhood monopoly and the USPTO are trying to retroactively alter that. See A Brave New Modular World - Another MS Patent Application -- "System and method for delivery of a modular operating system".

Avoiding the tar pit -- and U.S.P.T.O.

Posted Feb 15, 2007 15:00 UTC (Thu) by flewellyn (subscriber, #5047) [Link]

Well, that's really a separate issue. Patenting the obvious is just another way that the obsolete
monopolists try to fight back with a broken patent system.

Avoiding the tar pit

Posted Feb 15, 2007 9:07 UTC (Thu) by ldo (guest, #40946) [Link] (6 responses)

In fact, a study a few years ago showed that Debian was distributing several times as much code as the latest version of Microsoft Windows at the time. I suspect that some current distros can match or beat Vista in this regard.

Avoiding the tar pit

Posted Feb 15, 2007 18:57 UTC (Thu) by jospoortvliet (guest, #33164) [Link] (2 responses)

Don't be so sure - that Beast uses >600 MB of RAM just to SIT there... :D

Avoiding the tar pit

Posted Feb 15, 2007 20:44 UTC (Thu) by zlynx (guest, #2285) [Link]

Vista's 600 MB is not wasted; it's actually prefetching and preloading the most useful DLLs. Some of it is used by actually executing code, true. But Vista can run in 256 MB (you have to install it with 512 MB and then remove some RAM), so it can't really be using 600 MB.

Avoiding the tar pit

Posted Feb 17, 2007 8:21 UTC (Sat) by ldo (guest, #40946) [Link]

I was referring to code size, not system resource usage.

Avoiding the tar pit

Posted Feb 15, 2007 20:25 UTC (Thu) by martinfick (subscriber, #4455) [Link]

And, in fact, a comparison of Vista to a Debian release might be more appropriate than one to kernel releases alone. I suspect that some might consider Debian to have had a few tar pits.

Here again, though, a difference from the proprietary world can be seen: even while waiting for a Debian release to become official, much of the world has actually been using it. It is in fact usually usable for quite some time before the release.

Avoiding the tar pit

Posted Feb 17, 2007 8:13 UTC (Sat) by daf (guest, #27590) [Link] (1 responses)

I don't think comparing Debian to Vista makes much sense, as Debian includes a vast amount of stuff for which Vista has no equivalent. Vista doesn't come with office programs, genealogy software, development tools, scientific software, games, instant messaging servers, etc. It's difficult to make a direct comparison, but it's worth noting that while all of Debian etch takes up 22 CDs, a useful system can be installed from just one.

Avoiding the tar pit

Posted Feb 17, 2007 8:23 UTC (Sat) by ldo (guest, #40946) [Link]

> I don't think comparing Debian to Vista makes much sense, as Debian includes a vast amount of stuff for which Vista has no equivalent.

Which just reinforces my point. Even with all that extra functionality, Debian and other major Linux distros still manage release schedules that are about an order of magnitude faster than Microsoft can.

Avoiding the tar pit

Posted Feb 15, 2007 4:18 UTC (Thu) by ordonnateur (guest, #6652) [Link] (2 responses)

"Throwing away code can be painful, but it is far less so than throwing away the entire project."

As Brooks said: "Plan to throw one away; you will, anyhow."

Avoiding the tar pit

Posted Feb 15, 2007 10:34 UTC (Thu) by bkoz (guest, #4027) [Link] (1 responses)

Yep.

Also, the throw-away is less painful than dealing with a half-finished check-in. (Zack Weinberg's term "incomplete transitions" comes to mind: a big change is checked in, but the old way of doing things is never removed, or not all code is updated to use the current thinking.)

An interesting study that we'll probably never see would be to evaluate the number of incomplete transitions in proprietary code bases vs. free software code bases. (For large projects: the kernel, gcc, or KDE come to mind.)

This would be a way to quantify the effectiveness of the pothole fixers.
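
To make the pattern concrete, here is a minimal, hypothetical sketch in C (all names invented for illustration): the new interface went in, the old one never came out, and callers now use both.

    /* incomplete_transition.c - hypothetical sketch; all names invented.
     * The new interface was merged, but the old one was never removed,
     * so both must be maintained and callers use them inconsistently. */
    #include <stdio.h>
    #include <string.h>

    /* Old interface: buffer size assumed, no error reporting. */
    static void frob_init(char *buf)
    {
        strcpy(buf, "default");               /* unchecked - the pothole */
    }

    /* New interface: bounded copy with an error return. */
    static int frob_init_sized(char *buf, size_t len)
    {
        if (len < sizeof("default"))
            return -1;
        memcpy(buf, "default", sizeof("default"));
        return 0;
    }

    int main(void)
    {
        char a[16], b[16];

        frob_init(a);                         /* old caller, never converted */
        if (frob_init_sized(b, sizeof(b)) == 0)   /* converted caller */
            printf("%s %s\n", a, b);
        return 0;
    }

Removing frob_init() and converting its remaining callers is exactly the unglamorous pothole-fixing the article describes.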

ps. thanks for the article about meta-software dev issues. Any F. Brooks-quoting article is bookmarked by me.

Maintenance Biblio

Posted Feb 15, 2007 10:40 UTC (Thu) by bkoz (guest, #4027) [Link]

Zack's paper about maintenance and incomplete transitions is in the proceedings of the 2003 GCC Developers' Summit. It's called "A Maintenance Programmer's View of GCC" and is available here:

http://www.linux.org.uk/~ajh/gcc/gccsummit-2003-proceedin...

I would be interested in seeing other analyses for other projects.

2 examples I can think of

Posted Feb 15, 2007 8:43 UTC (Thu) by bronson (subscriber, #4806) [Link] (2 responses)

I think Firefox has been in a tar pit for the last year. 2.0 is basically just 1.5 with some extensions glued on. It will be interesting to see how long it takes to break free.

And XFree86 was a perfect example of free software lodged deep in tar. If it weren't for keithp and X.org, I shudder to think what using Linux would be like today.

2 examples I can think of

Posted Feb 15, 2007 20:16 UTC (Thu) by tetromino (guest, #33846) [Link] (1 responses)

IMHO, the canonical example of a project in a tar pit is Perl6. It has been in development for over 6 years, primarily because the Perl community decided to rewrite absolutely everything at the same time. Guido van Rossum's strategy of frequent, incremental improvements now seems like a wiser choice...

2 examples I can think of

Posted Feb 22, 2007 11:58 UTC (Thu) by arcticwolf (guest, #8341) [Link]

You're comparing apples and oranges here, although your confusion is understandable: despite what the name seems to suggest, Perl 6 is an entirely new language, not a new version of Perl 5. In fact, even though Perl 6 is in the works, Perl 5 is still being actively developed (and it's not just bugfixes that go into the tree), and new releases are still being made, just like before.

If you're familiar with LaTeX, Perl 6 is probably the equivalent of LaTeX 3; and FWIW, there have been suggestions that Perl 6 should adopt a different name to make it explicit that it is a new language, too.

(And as for wiser choices, one might add that Perl 6 actually *is* going strong and coming along nicely, so it's not like it's a failed project, anyway.)

Six month development cycles

Posted Feb 15, 2007 9:10 UTC (Thu) by error27 (subscriber, #8346) [Link] (3 responses)

In the last couple of years, people have started adopting six-month development cycles. Gnome and KDE do it. Fedora, Ubuntu and Suse all do it. That feeds back to projects that want their code included in the distros. On a six-month schedule, a delay is not so bad; Fedora and Ubuntu, for example, were both delayed a week or so. No big deal.

Last December, Gnome decided not to do a massive Gnome 3.0 rewrite. It sounded grand on paper to throw out all the rules, but in real life, uh... not so much. Instead they're just going to carry on with the six-month releases.

That seems like a good amount of time. It's as often as possible without overwhelming the users, translators and artists. Microsoft is at a disadvantage here because they have to wait two years between releases for business purposes.

Six month development cycles

Posted Feb 15, 2007 12:17 UTC (Thu) by tialaramex (subscriber, #21167) [Link]

Actually, although all the news items you'll find talk about Fedora being delayed for a week or so, you'll notice that those news items are spread over quite a long period - much as Longhorn was never delayed by more than six months or so in any single announcement. In reality, Fedora Core 6's delays added up to about a month against the original schedule, which is significant, and it still shipped with some immediately noticeable and hard-to-retro-fix bugs (particularly misidentifying the install architecture). At best it was a learning experience for Red Hat and the Fedora community; at worst, a sign of what might be still to come.

Six month development cycles

Posted Feb 15, 2007 17:59 UTC (Thu) by vmole (guest, #111) [Link] (1 responses)

Um, there are a lot of users who would really *prefer* a multi-year cycle. Doing an OS upgrade every six months is not really desirable in many environments. Sure, you can skip some of the updates, but then (often) the upgrade isn't really supported, which brings its own problems.

Which is, of course, why Debian stable and RHEL (and others) exist.

Six month development cycles

Posted Feb 16, 2007 23:25 UTC (Fri) by JohnNilsson (guest, #41242) [Link]

Ubuntu will probably support LTS to LTS upgrades.

Avoiding the tar pit

Posted Feb 15, 2007 10:55 UTC (Thu) by ms (subscriber, #41272) [Link] (18 responses)

> In this context, it is notable (and worrisome) that an increasing number of proposed kernel features are finding themselves stalled by a lack of reviews.

It may be worth thinking about this for some considerable time. Which computing degrees actually teach anything substantive about OS design? Which even teach any thorough course on C? The degree I did (Imperial) had a lab on Minix, which we all hated and which is now a Linux kernel-based lab. There are maybe 1.5 courses on OS design. All the lecturers hate C with a passion (I have no interest in starting a flame-war here), and so that attitude is adopted by the students. In my professional experience I have never had to write any code in C at all.

My point is this: the number of competent software engineers with anything like a solid understanding of C is decreasing, and, I would guess, decreasing rapidly. Microsoft have rewritten large sections of Vista in C# and are even developing a new kernel purely in C# (no, I doubt this will ever be released or even used). How long will it be before the increasing complexity and the decreasing number of engineers start causing major issues for the Linux kernel? Now I'm sure that you can find stats that show the number of developers in the kernel has increased, but I simply can't believe this is sustainable. Discuss!

Avoiding the tar pit

Posted Feb 15, 2007 13:28 UTC (Thu) by pizza (subscriber, #46) [Link]

>It may be worth thinking about this for some considerable time. Which computing degrees actually teach anything substantive about OS design? Which even teach any thorough course on C?

When I was at Georgia Tech, I took many OS-level courses. I left right after they did a major overhaul of their CS program, but even post-overhaul, there were several parallel tracks one could take, including lots of hard-core OS theory, complete with hands-on Linux kernel hacking.

Granted, one could choose to focus on something entirely different (like visualization or human-computer-interaction) but the choice was there. Or at least it was a choice when I gradumicated back in 2000. :)

I don't know if they still offer that sort of thing, but I find it hard to imagine that they wouldn't.

(If a certain Jim Greenlee is reading, I still think CS2340 aka _Control & Concurrency_ was one of the most valuable classes I took!)

>Now I'm sure that you can find stats that show the number of developers in the kernel has increased, but I simply can't believe this is sustainable.

While the *rate* of kernel hacker-type graduates may be decreasing, the overall number is still increasing simply due to a much larger overall population.

Avoiding the tar pit

Posted Feb 15, 2007 16:08 UTC (Thu) by tjc (guest, #137) [Link] (5 responses)

> My point is this: the number of competent software engineers with anything like a solid understanding of C is decreasing, and, I would guess, decreasing rapidly.

C is number two on the TIOBE index (http://www.tiobe.com/tpci.htm), and decreasing very gradually. It's unlikely to decrease into obscurity, since C is the lingua franca for APIs, and the only viable alternatives for systems programming are C++ and D.

Avoiding the tar pit

Posted Feb 15, 2007 16:16 UTC (Thu) by ms (subscriber, #41272) [Link] (3 responses)

> C is number two on the TIOBE index (http://www.tiobe.com/tpci.htm), and decreasing very gradually. It's unlikely to decrease into obscurity, since C is the lingua franca for APIs, and the only viable alternatives for systems programming are C++ and D.

Yeah, I've seen that index before and have had long discussions about why it's probably not wise to take it too seriously. I would suggest that you actually only *need* to use C/C++/D in a very small number of cases. Most of the time, if your language of choice has sensible support for foreign function calls, then you can keep the C/C++/D to a minimum and use something a bit nicer for the rest. Take, for example, House (http://www.cse.ogi.edu/~hallgren/House/), or any number of the OSs written mainly in Java or, as I mentioned, C#.

As I said however, I don't want to start another language flame-war. But I do disagree with your assertion that C, C++ and D are the only possible languages for "systems programming".
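
As a purely hypothetical sketch of that "keep the C to a minimum" approach, the C side of such an FFI boundary can shrink to a single small shim built as a shared library (the names here are invented for illustration):

    /* shim.c - hypothetical minimal C layer behind an FFI boundary.
     * Build: cc -shared -fPIC -o libshim.so shim.c
     * A higher-level language loads libshim.so and calls shim_read_byte();
     * all policy and logic stay on the other side of the boundary. */
    #include <fcntl.h>
    #include <unistd.h>

    /* Return the first byte of the file at 'path', or -1 on any error. */
    long shim_read_byte(const char *path)
    {
        unsigned char b;
        int fd = open(path, O_RDONLY);

        if (fd < 0)
            return -1;
        if (read(fd, &b, 1) != 1) {
            close(fd);
            return -1;
        }
        close(fd);
        return b;
    }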

Avoiding the tar pit

Posted Feb 15, 2007 16:53 UTC (Thu) by tjc (guest, #137) [Link] (2 responses)

> I would suggest that you actually only *need* to use C/C++/D in a very small number of cases. Most of the time, if your language of choice has sensible support for foreign function calls, then you can keep the C/C++/D to a minimum and use something a bit nicer for the rest.

I see that you are of the "C is best avoided" persuasion. I also notice from your initial post that you have never programmed in C. You might like it better if you had some experience with it.

> Take, for example, House, [snip] or any number of the OSs written mainly in Java or, as I mentioned, C#.

I've often wondered: how does one write an OS in a language that compiles to bytecode? I suspect that at the bottom level there is some C and assembler in there somewhere. You can't write an OS without touching the hardware, obviously.

Avoiding the tar pit

Posted Feb 15, 2007 18:30 UTC (Thu) by AJWM (guest, #15888) [Link]

> how does one write an OS in a language that compiles to bytecode?

Clearly, you can only do it if the OS is for hardware that runs that bytecode. Hardware Java machines exist (mostly for embedded apps), and long ago Western Digital created a P-engine for UCSD Pascal's P-code (based on rewriting the microcode for the LSI-11 chipset, IIRC).

(Arguably the old Burroughs large systems (B6700, etc) were similar, since there was no assembly language and the systems programming was done in a variant of Algol called Espol. They cheated a bit, however, since Espol had a built-in array called 'memory[]', which did about what you'd expect.)

Avoiding the tar pit

Posted Feb 16, 2007 18:55 UTC (Fri) by dvdeug (guest, #10998) [Link]

I've tried C. I don't like C.

There are a lot of alternatives to C. Ada was designed for that type of work, for example. The Oberon system was written in Oberon. There's no reason why Java has to compile to bytecode; gcj is there for you, of course. And, yes, I'm sure there's assembler in there somewhere, just like there is in every C-based operating system.
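
And that bottom layer can be remarkably thin. A hypothetical sketch of the sort of wrapper every C-based (or gcj-compiled) kernel carries somewhere - x86 port I/O in GCC inline-assembly syntax; this is kernel-context code, not a standalone program:

    /* Hypothetical sketch: the thin assembly layer under a C (or compiled
     * Java/Oberon) kernel. GCC inline-assembly syntax, x86 port I/O. */
    static inline unsigned char inb(unsigned short port)
    {
        unsigned char value;

        __asm__ volatile ("inb %1, %0" : "=a"(value) : "Nd"(port));
        return value;
    }

    static inline void outb(unsigned char value, unsigned short port)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
    }

Everything above these few lines can be written in the higher-level language; only the instructions the hardware demands need assembly.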

C flamewar

Posted Feb 17, 2007 8:31 UTC (Sat) by ldo (guest, #40946) [Link]

> C is number two on the TIOBE index (http://www.tiobe.com/tpci.htm), and decreasing very gradually.

Interesting that the number-one item on that list (Java) is showing an even greater rate of decline.

C is what you might call the "nuts-and-bolts" language of choice. Whatever your favourite language might be, it is almost certainly implemented in C. Whereas in the early days of PCs (1970s/early 1980s) it was impossible to do any system-level programming without knowing some assembly language, nowadays you can't do it without knowing C.

Avoiding the tar pit

Posted Feb 15, 2007 16:18 UTC (Thu) by IXRO (guest, #39871) [Link] (10 responses)

> My point is this: the number of competent software engineers with anything like a solid understanding of C is decreasing, and, I would guess, decreasing rapidly.

I don't think so. I think the number of engineers with a less-than-solid understanding of C is decreasing rapidly, because they migrate to higher-level languages.

Avoiding the tar pit

Posted Feb 15, 2007 18:35 UTC (Thu) by ajross (guest, #4563) [Link] (9 responses)

I'd suggest that the number of competent C developers was never that high to begin with, honestly. It's just that the past decade and a half has opened up the world of "development" to people who would otherwise never have learned to program at all. Their world is one of Javascript and VB, and we are all richer for it.

But to take from that that "C is dying" is just silly. It's just as useful for all the same tasks as it always was. And many of its perceived shortcomings (raw memory indirection, for example) are not, in fact, much of a problem at all when using modern tools. I've done professional development both in Java (with checked arrays and typing) and in C (with regular valgrind use), and I can honestly say that I'm equally productive in both environments.
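
For what it's worth, the "modern tools" point is easy to demonstrate. A deliberately leaky toy program (hypothetical, for illustration) and the valgrind run that flags it:

    /* leak.c - deliberately buggy toy program; valgrind pinpoints the leak.
     *   $ cc -g -o leak leak.c
     *   $ valgrind --leak-check=full ./leak
     */
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *p = malloc(32);

        if (p != NULL)
            strcpy(p, "never freed");
        return 0;   /* no free(p): valgrind reports 32 bytes definitely lost */
    }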

Avoiding the tar pit

Posted Feb 16, 2007 3:06 UTC (Fri) by rjw (guest, #10415) [Link] (8 responses)

"I'm equally productive in both environments"

As someone who has just spent the last week debugging production problems in a messy C++ codebase, I very much doubt it. This involved replacing memory-management functionality and dynamically patching assembly thunks into vtables just to work out what the hell was going on. So the "you just hate C because you don't understand it" response is not an appropriate one. I hate it when people where I work use C and C++, because it encourages them to do things that they can't fix, without even knowing what is happening. And no, it's not just the C++ features that get people into a mess; it's also the C ones, and the dodgy emulations of higher-level features people create in these languages. If you really are as productive in C as in Java, I'm sure you have your own fake OO system (structs of function pointers and data, maybe indirected a bit), or a "pass around a function pointer and a void*" pseudo-closure convention. These are *nasty*, but if you don't have them, there is literally no way you can pretend to be as productive as in Java. Unless you program purely procedurally in Java. In which case....

There is simply no way that the mess people can get themselves into in languages with lax type systems (i.e. void*, object slicing, reinterpret_cast, etc.) is an acceptable price to pay for the perceived or actual performance improvements, or for the actual advantages like real templates (which are still less useful than macros plus real parametric polymorphism). The ease of debugging is enough to make you plump for Java. And to say you are just as productive in C suggests that you have never used a decent refactoring environment, i.e. Eclipse or IntelliJ. These are useful enough for undoing mistakes (which will always be rife) that I would even advocate Java + Eclipse over "dynamic" or functional languages for a large number of projects - especially those involving a lot of programmers with no clue, i.e. most "enterprise" ones.
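
(For readers who haven't seen it, the "fake OO system" mentioned above usually looks something like this hypothetical C sketch - a hand-rolled vtable as a struct of function pointers:)

    /* Hypothetical sketch of the "structs of function pointers" pattern. */
    #include <stdio.h>

    struct animal_ops {
        void (*speak)(void *self);        /* a "virtual method" */
    };

    struct dog {
        const struct animal_ops *ops;     /* hand-rolled vtable pointer */
        const char *name;
    };

    static void dog_speak(void *self)
    {
        struct dog *d = self;
        printf("%s: woof\n", d->name);
    }

    static const struct animal_ops dog_ops = { .speak = dog_speak };

    int main(void)
    {
        struct dog d = { .ops = &dog_ops, .name = "Rex" };

        d.ops->speak(&d);                 /* dynamic dispatch, by hand */
        return 0;
    }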

Avoiding the tar pit

Posted Feb 16, 2007 6:17 UTC (Fri) by bronson (subscriber, #4806) [Link] (7 responses)

Wow, that's a pretty acute case of language snobbery!

I contracted two years ago trying to add some database features to ~60 kloc of Java backend code. It was death by abstraction. BufferedStreamReader this, UmbrellaHackException that. Never have I seen so much code produce so little functionality. I could have rewritten it in about 10 kloc of C+glib+etc. Does this mean that Java sucks?

Crap code exists everywhere. Despite your protestations, rjw, Java's handcuffs won't save you from scary programmers (check the Daily WTF if you don't believe me). If a program's fundamentals are wrong, it's usually faster to just rewrite it than bang away on Eclipse's Shift-Alt-T for years. I use Eclipse every day nowadays. It's decent, but it's no panacea.

Avoiding the tar pit

Posted Feb 16, 2007 18:58 UTC (Fri) by dvdeug (guest, #10998) [Link] (5 responses)

So saying that one language is better than another is now language snobbery? How can you dismiss people who deal with problems in C that other languages don't have by design, and say it's just language snobbery?

Avoiding the tar pit

Posted Feb 16, 2007 19:10 UTC (Fri) by ajross (guest, #4563) [Link] (1 responses)

> How can you dismiss people who deal with problems in C that other languages don't have by design, and say it's just language snobbery?

Stop flaming. Some of us use C and like it, and neither we nor the language are going away. If you can't handle that fact, and insist on engaging in flames even where none were provoked, then yes: you are a snob.

Make your design decisions, write your code, and make it work well. Presuming to tell smart people how to write their code is just random fanboi noise, and doesn't really belong in this forum.

Avoiding the tar pit

Posted Feb 16, 2007 20:05 UTC (Fri) by dvdeug (guest, #10998) [Link]

So saying that someone has an acute case of language snobbery is not a flame, but pointing out that there are valid points for the argument is a flame? Bringing up a claim that C and Java are equally productive is likely to bring counter-claims, one of which was promptly attacked.

The question at the start of this topic is "What can we do to avoid a grim future where we bog down, our software collapsing under its own weight?". That is, this forum is all about design decisions and talking about what smart people should do.

Avoiding the tar pit

Posted Feb 17, 2007 10:37 UTC (Sat) by bronson (subscriber, #4806) [Link] (2 responses)

Yep! Just like saying Ford is better than Chevy is snobbery, or vi is better than Emacs, or San Francisco is better than New York. There are rational arguments that can be constructed, yes, but none of them boil down to "better".

To say that C suffers from problems that Java has fixed, and at the very same time ignore the many Java problems that C solves, well... I gotta admit, that sounds snobbish to me. Each language has its merits. Is Java better than PostScript? Or Forth? Or Lisp?

Generally pouring dirt on someone's choice of programming language is not productive, especially if you offer little more than personal anecdote to back it up.

Avoiding the tar pit

Posted Feb 17, 2007 15:15 UTC (Sat) by dvdeug (guest, #10998) [Link] (1 responses)

A tool is a tool, and rational people can discuss whether it's a better or worse tool than another tool without throwing around attack words like snobbery. Making things personal does nothing to rationally analyze the benefits and drawbacks of a tool.

Avoiding the tar pit

Posted Feb 17, 2007 23:23 UTC (Sat) by bronson (subscriber, #4806) [Link]

You're right, I worded that poorly. Instead of "wow, that's a pretty acute case of language snobbery," I wish I had said "wow, that's a good example of language snobbery." I only wanted to ridicule the argument, definitely not the person doing the arguing.

Snobbery describes the act of advocating a single position while utterly ignoring the merits of opposing positions. Your statement claimed that C has flaws and totally dismissed any possibility that it might also have benefits that overcome those flaws. If you can tell me a more appropriate word to use for this type of argument, I'd be happy to use it.

If you'd like to continue to express your dislike of C, I'm probably interested in what you have to say. But I hope that you will use reasoned arguments and back them up with real-world examples. Personal anecdotes and vague assertions aren't convincing and tend to impede discussion.

Personally, I have written tens of thousands of lines of Java and hundreds of thousands of lines of C. My position: Java has a problem domain that it solves very well, and C has a problem domain that it solves very well. At this point neither language can possibly be considered "better" than the other.

(I'm going on a 7-day unwired vacation tonight so a reply might be somewhat delayed...)

Avoiding the tar pit

Posted Feb 18, 2007 15:02 UTC (Sun) by rjw (guest, #10415) [Link]

Look, I don't even particularly *like* Java. There are all kinds of issues with it. But it seriously has the best tools available, for both debugging and coding, and we are talking about productivity, not purity, or holiness, or personal preferences...

There are certain classes of error that you are just not exposed to in Java that you are in C:

- Memory nightmares: unclear ownership, double frees, etc. Yes, you can get reference leaks in Java; with a heap inspector, these are really very easy to find. Yes, you can use smart pointers in C++, or a reference-counting convention, or even add in the Boehm collector and hope that none of your values look a bit like pointers. But it still comes up again and again - these problems do plague code written in C.

- Home-grown type systems: less of a problem in C++, but really, show me any decent C program of a reasonable size that doesn't have some OO or closure hack hanging about (can we say GObject?). And please, let's not pretend that it doesn't take a bit of time to get these working decently; the vast majority of them are very painful to debug.

Really, I don't mind C or C++ for programming things myself. I can navigate the potholes, because I know where they are. But it will take me longer, as I'll have to worry about non-issues; the potholes Java has, for the vast majority of code people have to write, are a lot shallower. There *are* advantages to C and C++. But those advantages are wildly outweighed by the disadvantage of either ending up with all kinds of nasty problems, or having a *very* expensive hiring process - remember, programmers often need to be hired for combinations of skills, including knowledge outside the language itself. The damage that can be done in Java is limited in comparison.

"Despite your protestations, rjw, Java's handcuffs won't save you from scary programmers" - Way to ignore my entire post. Crap programmers can do less damage in java. Its very easy for people to think they know C well without knowing what the hell they are doing. I'm not saying this applies to you, but to the general ecosystem.

"I could have rewritten it in about 10 kloc of C+glib+etc"
and I would love to know what aspect Java would have prevented you writing it in less than that, and what added cost of maintenance it would have added for your client - "* Must know glib" is the kind of line recruitment consultants love to see.

Seriously, it's not snobbery, it's pragmatism.
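
(The other idiom mentioned in this thread, the "pass around a function pointer and a void*" pseudo-closure, in an equally hypothetical C sketch:)

    /* Hypothetical sketch of the pseudo-closure convention: the pair
     * (fn, ctx) stands in for a real closure. */
    #include <stdio.h>

    typedef void (*visit_fn)(int value, void *ctx);

    static void for_each(const int *a, int n, visit_fn fn, void *ctx)
    {
        int i;

        for (i = 0; i < n; i++)
            fn(a[i], ctx);                /* invoke the "closure" */
    }

    static void sum_cb(int value, void *ctx)
    {
        *(int *)ctx += value;             /* ctx carries the captured state */
    }

    int main(void)
    {
        int data[] = { 1, 2, 3, 4 };
        int total = 0;

        for_each(data, 4, sum_cb, &total);
        printf("%d\n", total);            /* prints 10 */
        return 0;
    }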

boil-the-ocean vs potholes

Posted Feb 15, 2007 15:08 UTC (Thu) by rfunk (subscriber, #4054) [Link]

The contrast between major overhauls and incremental improvements reminds me of my
worries over KDE4. I hope they don't get so caught up in the overhauls that they get
mired, or miss the potholes, or just "fix" what isn't broken.

shiny new next-generation filesystem?

Posted Feb 15, 2007 22:24 UTC (Thu) by brouhaha (subscriber, #1698) [Link] (8 responses)

> Like Vista, we lack a shiny new next-generation filesystem; the only credible attempt at such a filesystem (reiser4) remains in a stalled, feature-reduced state

Are JFS and XFS not shiny and new enough to suit you?

Maybe the reason reiser4 remains in a stalled, feature-reduced state is that it is far too shiny and new?

shiny new next-generation filesystem?

Posted Feb 15, 2007 22:41 UTC (Thu) by bronson (subscriber, #4806) [Link] (5 responses)

Considering that XFS was started in 1993 and JFS in 1991, I'd hardly call either of them new! They're good for certain workloads but, no, they aren't shiny.

shiny new next-generation filesystem?

Posted Feb 16, 2007 0:01 UTC (Fri) by brouhaha (subscriber, #1698) [Link] (4 responses)

Personally I don't care *when* they were written; I'm only interested in whether they provide the features and performance I need. Thus far they do. They are sufficiently "new and shiny" as compared to Ext2/Ext3.

What compelling advantage does Reiser4 purport to offer over JFS and XFS? I know it's supposed to have some amazing tree-based structure "under the hood", but what will that really buy me?

shiny new next-generation filesystem?

Posted Feb 16, 2007 23:39 UTC (Fri) by JohnNilsson (guest, #41242) [Link] (2 responses)

Reiser4 promises an API improvement. That it's also a good filesystem implementation is just a bonus. The compelling features are plug-ins, usable value-per-file, and file-as-folder semantics.

shiny new next-generation filesystem?

Posted Feb 17, 2007 1:15 UTC (Sat) by cantsin (guest, #4420) [Link] (1 responses)

The question is whether these API changes break Unix semantics and introduce security/stability issues; see the discussion of Reiser4 on lkml. And shouldn't filesystem/VFS plug-ins be done in userspace? Unless I am missing something, it appears that FUSE implements this functionality in a safe, clean, and filesystem-transparent way. Last but not least, it's a solution that works today instead of a rather vague future promise. (I first heard Hans Reiser talking about ReiserFS plug-ins in ca. 2001.)

shiny new next-generation filesystem?

Posted Feb 17, 2007 1:52 UTC (Sat) by brouhaha (subscriber, #1698) [Link]

The earliest I'd heard of filesystems with plugins (to create virtual directories or files) was around 1986, in the development of the Intel/Siemens "Gemini" project, which was in a sense a successor to the Intel iAPX 432, and which they attempted to commercialize under the name "BiiN". The original specification for the Osiris operating system had this capability, though I'm not sure it was present in the delivered release. BiiN was ultimately a commercial failure, as the iAPX 432 had been.

The suggested applications were things like integrating revision control into the file system, and making archives (e.g., tar files) into virtual directories.

Although these ideas are great in theory, and perhaps even of some value in practice, I agree with cantsin that they are better implemented above the filesystem interface, rather than as plugins that can only work with one filesystem and not others. This could be done in the kernel, above the VFS layer, but as cantsin suggests, user space (e.g. FUSE) is probably best.

shiny new next-generation filesystem?

Posted Feb 19, 2007 15:35 UTC (Mon) by jschrod (subscriber, #1646) [Link]

If you want to look at an innovative file system of today, look at Solaris' ZFS. Combining some functionality that was traditionally in volume managers and in classic file systems made a really spiffy new technology.

shiny new next-generation filesystem?

Posted Feb 16, 2007 20:21 UTC (Fri) by dvdeug (guest, #10998) [Link]

"New and shiny", I believe, refers to more than just speed. Vista was trying for some new database code, I believe, and Reiser4 has a bunch of features like metadata directories. Except for speed, JFS and XFS from the outside look a lot like the original Unix filesystems, with longer names and a few more metadata bits.

I'd like to think that there's more to be done with filesystems than was complete 30 years ago, but it does seem like the basic concepts behind the filesystem are quite stable.

shiny new next-generation filesystem?

Posted Feb 22, 2007 15:52 UTC (Thu) by alext (guest, #7589) [Link]

I agree. Why do we need a shiny new filesystem? My reaction was that surely
the Unix way would be to have a good filesystem and build what you want on
top of it. For example, a filesystem that handles changes to very large
files quickly could possibly (I have no figures; this is just an
off-the-top-of-my-head comment) support a database, and a suitable database
is the gateway to a lot of other services.

It seems to me that making a super-complex filesystem with database-like
metadata just adds to the size and complexity of the code base - asking
for a tar pit to open?

Avoiding the tar pit

Posted Feb 16, 2007 4:46 UTC (Fri) by Max.Hyre (subscriber, #1054) [Link] (3 responses)

Don't forget Free Software's freedom to ignore backward compatibility: every so often the mess exceeds some developer's aesthetic threshold, and a section gets ripped out and (more) cleanly recoded. Not doing that periodically leaves a trail of tar pits behind, each of which must be worked around by all new code.

There are two aspects to this.

First, and the lesser, is that the process is affordable. Someone wants to do it, and there's no one to insist otherwise. If you've ever tried that in a proprietary setting, you know how quickly you get slapped down for "unproductive" effort. Since the old stuff works well enough for the users (they bought it last time, didn't they?), there's no percentage in making it work better. Adding shiny new stuff is where the bucks are.

The second, and greater, is that Free Software can afford to blow away backward binary compatibility. This is because most of the code's clients are also Free Software, which means many changes can be handled by a recompile, and the rest can be recoded to match. Witness the kernel's cavalier attitude toward its internal ABI.

I wonder how much NT and its offspring could be cleaned up if they didn't have to run unaltered binaries from the MS-DOS days. Does Vista still do that?

Avoiding the tar pit

Posted Feb 16, 2007 12:18 UTC (Fri) by copsewood (subscriber, #199) [Link]

> I wonder how much NT and its offspring could be cleaned up if they didn't have to run unaltered binaries from the MS-DOS days. Does Vista still do that?

NT supported MS-DOS by using a DOS emulation box. Vista will support 95/98-era programs through the use of virtual machines. The Microsoft Virtual PC product is a good enough VM that you can even run Ubuntu Linux on it - I have, and it works on consumer XP even though it doesn't claim to.

I think virtualisation and emulation are significant developments partly because these techniques allow newer technical architectures to provide backwards compatibility to older ones without getting bogged down by them. This approach is also usable in Linux to support older binaries if you prefer not to recompile or have lost the source, or never had source in the first place.

Emulation has also been used for many years to test software for machines, still in development, that don't yet exist. Tracy Kidder wrote about this in his very readable book "The Soul of a New Machine".

Avoiding the tar pit

Posted Feb 16, 2007 20:13 UTC (Fri) by dvdeug (guest, #10998) [Link]

Then again, Free Software frequently doesn't ignore backward compatibility. X predates Microsoft Windows, and Unix predates DOS. We still use shells that are compatible with the Bourne shell. A lot of Free Software (particularly old GNU code) is littered with code to support systems that are ancient. It wasn't until C89 was nearly 20 years old that GCC finally dropped support for systems without a C89 compiler, and it still supports a number of old, dying systems. New Windows code, on the other hand, supports Intel x86, ia64, and AMD64 chips running a couple of versions of Windows.

Avoiding the tar pit

Posted Feb 17, 2007 8:01 UTC (Sat) by daf (guest, #27590) [Link]

While the kernel developers may modify their internal interfaces on a regular basis, they're fairly strict about maintaining the userland interface. Changing that would be incredibly disruptive.

Filesystem tar pit

Posted Mar 6, 2007 12:46 UTC (Tue) by appie (guest, #34002) [Link] (1 responses)

I'm surprised Microsoft didn't replace their overdone, RDBMS-based WinFS concept with something equally 'new' but far less bloated: using file metadata to store tags, and redefining access to data (i.e. files) by storing and selecting descriptions of the data (i.e. tags) instead of the really, really tired old concept of directories and files.

This is an area where FOSS could showcase an innovative approach.
Full-text indexing like Spotlight is nice, but I have a sneaking suspicion that it doesn't scale. Plus, it isn't exactly user-supplied metadata.

I hope a team of FOSS developers picks up the idea of using metadata in the various Linux filesystems to store tags, and then comes up with KDE and Gnome interfaces to store and select those tags.

I think it's far more intuitive to describe your data, and to make it possible for different users to describe the same data according to their own individual tastes.
After all, creating a directory tree and coming up with a filename is a way of describing your data as well.
And instead of deleting or moving your data: tag the tags ("meta-tags").

I could go on and on, but I'll stop here.
Maybe someone can donate me another 8 hours a day; then I could start working on it :)
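
(A plausible starting point, sketched hypothetically: Linux extended attributes already provide per-file, user-supplied metadata on ext3, XFS, JFS, and ReiserFS, where the filesystem is mounted with user xattrs enabled, so a first-cut tagging tool could be little more than a wrapper around setxattr() and getxattr():)

    /* tagdemo.c - hypothetical sketch: storing user tags in extended
     * attributes. Assumes a filesystem mounted with user xattr support. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/xattr.h>

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : "example.txt";
        const char *tags = "vacation,2007,beach";
        char buf[256];
        ssize_t len;

        /* Store the tags in the "user." attribute namespace. */
        if (setxattr(path, "user.tags", tags, strlen(tags), 0) != 0) {
            perror("setxattr");
            return 1;
        }

        /* Read them back. */
        len = getxattr(path, "user.tags", buf, sizeof(buf) - 1);
        if (len < 0) {
            perror("getxattr");
            return 1;
        }
        buf[len] = '\0';
        printf("tags on %s: %s\n", path, buf);
        return 0;
    }

The KDE/Gnome interfaces and tag-based selection would then be userspace work on top of this, with no new filesystem required.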

Filesystem tar pit

Posted Mar 6, 2007 20:25 UTC (Tue) by zlynx (guest, #2285) [Link]

Vista does that already. You should play with a demo machine at CompUSA or Best Buy or something.

See this; tags are down near the bottom of the page:
http://www.microsoft.com/windows/products/windowsvista/fe...

