
The vi editor causes brain damage

From:  Marc Perkel <mperkel-AT-yahoo.com>
To:  linux-kernel-AT-vger.kernel.org
Subject:  The vi editor causes brain damage
Date:  Sat, 18 Aug 2007 22:20:34 -0700 (PDT)
Message-ID:  <964583.28477.qm@web52504.mail.re2.yahoo.com>
Archive-link: Article

Let me give you an example of the difference between Linux open source world brain-damaged thinking and what it's like out here in the real world.

Go to a directory with 10k files and type:

rm *

What do you get?

/bin/rm: Argument list too long
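
(The stock workaround, of course, is to batch the list through xargs instead of passing one giant argument list; a minimal sketch, assuming GNU find and xargs:

find . -maxdepth 1 -type f -print0 | xargs -0 rm --

But you shouldn't have to know that just to empty a directory.)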

If you map a network drive in DOS and type:

del *

It works.

That's the problem with the type of thinking in the open source world. Why can DOS delete an infinite number of files when rm can't? Because rm was written using the "vi" editor, which causes brain damage, and that's why after 20 years rm hasn't caught up with del.

Before everyone gets pissed off and freaks out, why don't you ponder the question of why rm won't delete all the files in the directory? If you can't grasp that, then you're brain damaged.

Think big, people. Say NO to vi!


Marc Perkel
Junk Email Filter dot com
http://www.junkemailfilter.com






The vi editor causes brain damage

Posted Aug 23, 2007 12:15 UTC (Thu) by lysse (guest, #3190) [Link] (1 responses)

Ouch. My IQ went down half a standard deviation just READING that codswallop...

The vi editor causes brain damage

Posted Aug 23, 2007 14:27 UTC (Thu) by wilreichert (guest, #17680) [Link]

Guess I'm too brain damaged for that to make any sense at all

So, is there a reason not to make the arg. list length grow dynamically?

Posted Aug 23, 2007 14:44 UTC (Thu) by sayler (guest, #3164) [Link] (9 responses)

http://thread.gmane.org/gmane.linux.kernel/571913

> It would be very handy if the argument memory space was expanded.
> Many years ago I hit the limit regularly on Solaris, and going to
> Linux with its comparatively large limit was a joy. Now it happens to
> me quite often on Linux as well.
>

done :)

commit b6a2fea39318e43fee84fa7b0b90d68bed92d2ba
Author: Ollie Wild <aaw <at> google.com>
Date: Thu Jul 19 01:48:16 2007 -0700

mm: variable length argument support

Remove the arg+env limit of MAX_ARG_PAGES by copying the strings
directly from the old mm into the new mm.

--
Paolo Ornati
Linux 2.6.23-rc3-g2a677896 on x86_64
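
For anyone who wants to see the old limit and the new behaviour, a rough sketch (exact numbers depend on page size and stack rlimit):

getconf ARG_MAX
# 131072 (32 pages * 4 KB) under the old MAX_ARG_PAGES limit

/bin/echo $(seq 1 50000) > /dev/null
# roughly 280 KB of arguments: "Argument list too long" on old kernels, fine with this patch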

So, is there a reason not to make the arg. list length grow dynamically?

Posted Aug 26, 2007 8:58 UTC (Sun) by wolfgang.oertl (subscriber, #7418) [Link] (2 responses)

I hope this won't break autoconf's detection of the maximum argument length.

So, is there a reason not to make the arg. list length grow dynamically?

Posted Aug 26, 2007 23:22 UTC (Sun) by bronson (subscriber, #4806) [Link] (1 responses)

Shouldn't be a problem. Apparently autoconf starts large and works backward. (I'm just relaying a discussion I saw on LKML though; I don't have first-hand knowledge here)

So, is there a reason not to make the arg. list length grow dynamically?

Posted Aug 29, 2007 13:30 UTC (Wed) by nix (subscriber, #2304) [Link]

Indeed it does (although actually it's libtool).

(IIRC, it used to start small and work up, but that was much too slow).
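
Roughly the start-large-and-back-off idea, as a sketch (not the real libtool code, which lives in libtool.m4 and is cached as lt_cv_sys_max_cmd_len):

len=1048576
while ! /bin/true $(head -c $len /dev/zero | tr '\0' 'x') 2>/dev/null; do
    len=$((len / 2))      # halve until an exec with one long argument succeeds
done
echo "a command line of about $len bytes still works"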

So, is there a reason not to make the arg. list length grow dynamically?

Posted Aug 27, 2007 10:48 UTC (Mon) by liljencrantz (guest, #28458) [Link] (5 responses)

Awesome!

The xargs command is a horrible, evil kludge invented in an era when memory and processing power were many orders of magnitude more expensive. It has _no_ place in a modern operating system. Linux should have worked this way from the start.

I really hope that this patch will make it into the distros ASAP; it will make my life loads easier, since I regularly run into this bug.

So, is there a reason not to make the arg. list length grow dynamically?

Posted Aug 30, 2007 15:24 UTC (Thu) by welinder (guest, #4699) [Link] (4 responses)

xargs has very nice applications not related to very long command
lines. For example, it enables the "find" and "do" stages to work
in parallel:

find . -type f -print | xargs -n100 grep oink /dev/null

(Bonus points for knowing why that /dev/null is there and points
for fixing the space problems using the 0 flags.)

So, is there a reason not to make the arg. list length grow dynamically?

Posted Aug 30, 2007 16:53 UTC (Thu) by stuart_hc (guest, #9737) [Link] (1 responses)

The /dev/null argument is there so standard grep will see at least two file arguments and therefore always output the filename of matches. GNU grep adds the extension -H or --with-filename, making this trick unnecessary.
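
To see the difference concretely (notes.txt is just a stand-in filename):

grep oink notes.txt              # one file argument: matching lines only
grep oink notes.txt /dev/null    # two file arguments: lines come out prefixed "notes.txt:"
grep -H oink notes.txt           # GNU grep: force the prefix without the /dev/null trick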

So, is there a reason not to make the arg. list length grow dynamically?

Posted Aug 30, 2007 21:28 UTC (Thu) by dfsmith (guest, #20302) [Link]

I suspect it's more to do with not hanging on stdin when find doesn't find anything.

So, is there a reason not to make the arg. list length grow dynamically?

Posted Aug 30, 2007 17:54 UTC (Thu) by zlynx (guest, #2285) [Link]

Speaking of parallel operations, use find with xargs -n and -P for more fun. Really great on SMP systems.

I/O is usually the limit, but some operations take more CPU time than I/O: a complicated perl -i transform over a whole Linux kernel tree, for example, or xsltproc.
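
Something like this, as a sketch (the pattern and the process count are made up):

find . -name '*.c' -print0 | xargs -0 -n 64 -P 4 perl -pi -e 's/\bold_name\b/new_name/g'
# -n 64: at most 64 filenames per perl invocation
# -P 4: keep up to four perl processes running at once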

You can also get to a CPU limit instead of a disk limit these days by using compressed data, either gzipped on disk, or something like Reiser4 with file compression.

Speaking of which, I find most high-end workstations quite disappointing these days. Designers need to spend more time adding fast disks.

print0 is an evil kludge

Posted Aug 31, 2007 14:35 UTC (Fri) by emj (guest, #14307) [Link]

find . -type f -print0 | xargs -r -0 -n100 grep oink

Note the -r

I wish there were a better way to do -print0; I mean, you can't do an ls -0 | xargs -0 ls to skip problems with spaces, right?
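
ls has no -0, so the usual stand-ins look something like this (a sketch):

find . -mindepth 1 -maxdepth 1 -print0 | xargs -0 ls -ld
printf '%s\0' * | xargs -0 ls -ld    # or let the shell glob and NUL-terminate the names itself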

The vi editor causes brain damage

Posted Aug 24, 2007 0:58 UTC (Fri) by jzbiciak (guest, #5246) [Link] (1 responses)

Explain to me again how an upper limit on argv[] has anything to do with /bin/rm?

The difference between DOS's del and /bin/rm is that one self-globs and the other relies on the shell to glob. I believe that UNIX architecture choice was made before vi was a twinkle in Bill Joy's eye.

In fact... Thompson's shell did support globbing via an external helper, /etc/glob. (Take a look in the source. It's there!)

By centralizing glob functionality in the shell, none of the other utilities need to know or care how it works. Shells can come up with all sorts of crazy ways to match filenames as a result, and it automagically works with everything.
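
You can watch the shell do the work (a sketch, in a scratch directory):

touch a.txt b.txt
echo rm *.txt      # the shell has already expanded the glob: prints "rm a.txt b.txt"
set -f             # turn pathname expansion off ("noglob")
echo rm *.txt      # now the literal pattern survives: prints "rm *.txt"
set +f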

What next, a complaint that 'mv' can't do a rename along the lines of "ren abc???.txt def???.txt"? Just because UNIX (and Linux) don't do it the DOS way doesn't mean it's inferior. It just means it's different.

And, I heartily welcome this patch. Tell me how I'd do that with DOS again?

The vi editor causes brain damage

Posted Aug 24, 2007 12:02 UTC (Fri) by nix (subscriber, #2304) [Link]

Oh *thank* you for that link. I'd somehow never looked at the Thompson shell before, and it was... educational to see C used as it was intended to be used. (Note the extensive use of implicit int, register, and local `extern' for things like `extern errno'.)

exit.c (`seek to the end of stdin') and if.c were an education in themselves (the trick pulled with the ncom variable in if.c alone...)

Ah, those were the days. (Cramped and confining days, but still. Even more cramped and confining for me given that I was probably not even born.)

The vi editor causes brain damage

Posted Aug 25, 2007 16:00 UTC (Sat) by pm101 (guest, #3011) [Link] (5 responses)

His trolling aside, Marc was fundamentally right. Regardless of the technical reasons why it was done that way, and regardless of how hard or easy it is to fix, "rm *" not working on large directories is brain-damaged, and someone ought to have fixed it before now. Most people's reaction to the troll was idiotic and immature at best.

The vi editor causes brain damage

Posted Aug 26, 2007 1:05 UTC (Sun) by bronson (subscriber, #4806) [Link]

I'm pretty sure that most of the people responding were trolling themselves... I only saw 3-4 intelligent responses in that whole mess. Hardly surprising.

You ever see that cartoon where a wolf in a sheep's costume stands up in a field of other wolves wearing obvious sheep's costumes and shouts, "Wait! Isn't anybody here a sheep??" I'm afraid a lot of LKML threads turn into that.

The vi editor causes brain damage

Posted Aug 30, 2007 5:42 UTC (Thu) by renox (guest, #23785) [Link] (3 responses)

In the 'Unix haters' book (which is quite old), one of the criticisms is that globbing is done by the shell instead of by each command (with a common library to avoid variation), which would prevent situations like this.

I doubt that Linux will be able to fix this kind of Unix legacy; it still doesn't have a versioned filesystem by default.

The vi editor causes brain damage

Posted Aug 30, 2007 6:17 UTC (Thu) by bronson (subscriber, #4806) [Link] (2 responses)

While it's not quite perfect (the unexpanded glob pattern still gets passed back when no files could be found), "argument list too long" is fixed in 2.6.23. That goes a long way toward cleaning up this legacy.
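
The leftover wart is the shell's, not the kernel's; bash can paper over it with nullglob (a sketch):

shopt -s nullglob   # an unmatched glob now expands to nothing instead of itself
rm -f *.core        # no "cannot remove '*.core'" complaint when nothing matches
shopt -u nullglob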

And, does anything have a versioned filesystem by default? Even ZFS only has snapshotting. (Yes, I know VMS ruled for this... and only this.) If there really is huge demand for versioning (and I'm skeptical), it really doesn't seem too hard to add.

The vi editor causes brain damage

Posted Aug 30, 2007 9:51 UTC (Thu) by renox (guest, #23785) [Link] (1 responses)

>And, does anything have a versioned filesystem by default?

You answered the question yourself: VMS did.

> if there really is huge demand for versioning (and I'm skeptical)

There really is a huge demand for versioning, but it's hidden in the large number of 'how do I recover/undelete this file?' requests.

A versioned filesystem would solve a big percentage of those cases, except of course in the case of hardware errors.

Sure, you can say that losing files is a motivation for doing proper backups, but since backups are still largely not done properly, that doesn't seem to work.

The vi editor causes brain damage

Posted Aug 30, 2007 16:48 UTC (Thu) by bronson (subscriber, #4806) [Link]

And there are soooo many people clamoring for an undelete feature in Ext3 / Reiser, etc...? You claim huge demand, but all I see is the occasional email message and LOTS of people happily living with ext3's "I zero the block pointers, haha!" anti-undelete feature.

Have you actually used VMS? It's a perfect example of why versioned filesystems haven't caught on! Adding a semicolon to roll back in time was easy, yes, but then you'd have to become very intimate with PURGE or you'd blow your quota by the end of the day. And you thought keeping your home directory small was hard in Unix! :)

My position: everybody agrees that versioning would be extremely useful. The problem, of course, is that it comes at a cost: performance, capacity, and maintenance. And nobody (not Microsoft, Sun, Apple, Be, Linus, etc.) has figured out how to reduce the cost to the point where it's actually worth it.

Hopefully we discover in October that Apple has finally solved this one. If they can show how to do it right, I'll bet Windows and Linux won't be far behind!


Copyright © 2007, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds