Posted Jul 21, 2006 16:42 UTC (Fri) by cventers
In reply to: inotify
Parent article: OLS: On how user space sucks
There is a difference between a vendor choosing to make i386 releases and
programmers refusing to use the features of any more modern chip simply
because a few i386 boxes are still out there clocking their ops. One of
the great things about having open source code is that you can download
and build your own packages optimized just how you choose (indeed,
distributions like Gentoo even make it easy). You're doing well if your
code will build for old hardware but otherwise make use of new features.
The problem with the least common denominator argument isn't really the
suckiness of the reality that the argument represents, it's the fact that
it ever gets used as an excuse to write code in which "sub-optimal" is an
acceptable design goal.
Furthermore, the fact that different systems require different code to be
optimal is a fact of life. It's why we have abstraction layers at all. If
every system were the same, operating systems either wouldn't exist or
they'd be a hell of a lot more simple, and that goes for everything from
the bottom of the stack up. It's very much a reality, as you put it.
When you choose to support multiple systems, you should be ready to write
multiple implementations of the same function. Writing to the least
common denominator -- and not ever specializing -- is a cop-out.
> "Optimization without instrumentation is just mental masturbation"
I've never much been a fan of that argument either, because it's often
used to justify incredibly sloppy / inefficient code. The quote as it
stands is simply imprecise. There are /some/ optimizations which are
questionable enough that you very much want instrumentation before you
write large chunks of code, but the world just isn't black and white.
Put another way: I would like to think that any reasonably talented
systems programmer would know that polling files several times a second
for something like menu entries, or assembling entire HTTP queries and
responses several times a second to communicate with a system tray icon,
is a bad idea -- something that could be optimized. No need for
instrumentation at all.
These arguments (the least common denominator and the no optimization
without instrumentation) really irritate me, because I started on a 386
and many common operations take more wall-clock time today than they did
back then. I'm now on a Pentium 4, for chrissakes, with a gigabyte of DDR
RAM. What has happened is that as the generations go on, some of us seem
to be trading in programmer time for CPU time (read: being lazy).
It seems like a perfectly acceptable bargain, and on some level it is. (I
don't think any sane person expects you to write desktop apps in
assembler, even though if you somehow had the dedication and
concentration required you'd make something at least slightly faster).
The problem is that programmers are _being lazy_ and choosing points on
the "diminishing returns" curve that are well before the returns actually
start to diminish.
I'm sure not all of Dave's identified misbehaviors were even apparent to
the programmers in question. Many of them are probably 'bugs'. But when I
hear about applications hammering the filesystem many times per second,
or using HTTP as an IPC mechanism between a system tray icon and another
program, I worry that we've all gone just a little bit crazy.
So I propose a new quote:
"Sensible optimizations give pleasure by default"