inotify
Posted Jul 21, 2006 16:42 UTC (Fri) by cventers (guest, #31465)
In reply to: inotify by pizza
Parent article: OLS: On how user space sucks
There is a difference between a vendor choosing to make i386 releases and
programmers refusing to use the features of any more modern chip simply
because a few i386 boxes are still out there clocking their ops. One of
the great things about having open source code is that you can download
and build your own packages optimized just how you choose (indeed,
distributions like Gentoo even make it easy). You're doing well if your
code will build for old hardware but otherwise make use of new features.
The problem with the least common denominator argument isn't really the
suckiness of the reality that the argument represents, it's the fact that
it ever gets used as an excuse to write code in which "sub-optimal" is a
gross understatement.
Furthermore, that different systems require different code to be optimal
is a fact of life. It's why we have abstraction layers at all. If every
system were the same, operating systems either wouldn't exist or they'd
be a hell of a lot simpler, and that goes for everything from the bottom
of the stack up. It's very much a reality, as you put it.
When you choose to support multiple systems, you should be ready to write
multiple implementations of the same function. Writing to the least
common denominator -- and not ever specializing -- is a cop-out.
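To make that concrete, here is a minimal sketch -- the interface and names are invented for illustration, not taken from any project in the thread -- of what "multiple implementations of the same function" can look like in C: one small header, with a specialized backend per system chosen at build time (say, inotify on Linux and a stat()-polling fallback elsewhere), rather than writing every platform to the polling least common denominator.

    /* watch.h -- hypothetical "tell me when this path changes" interface.
     * Each platform supplies its own implementation of these functions;
     * the build system links in the matching backend (.c file). */
    #ifndef WATCH_H
    #define WATCH_H

    int  watch_open(const char *path);   /* begin watching; -1 on error */
    int  watch_wait(int handle);         /* block until a change occurs */
    void watch_close(int handle);

    #endif /* WATCH_H */

The fallback still exists for the old i386 box in the corner; it just no longer dictates how the Linux build behaves.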
> "Optimization without insturmentation is just mental masturbation"
I've never much been a fan of that argument either, because it's often
used to justify incredibly sloppy / inefficient code. The quote as it
stands is simply imprecise. There are /some/ optimizations which are
questionable enough that you very much want instrumentation before you
write large chunks of code, but the world just isn't black and white.
Put another way: I would like to think that any reasonably talented
systems programmer would know that polling files several times a second
for something like menu entries, or assembling entire HTTP queries and
responses several times a second to communicate with a system tray icon,
is a bad idea -- something that could be optimized. No need for
instrumentation at all.
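This subthread's title points at exactly that case. Below is a rough sketch, with minimal error handling and an illustrative directory path, of what replacing such a polling loop with Linux's inotify interface looks like:

    /* Sketch: wait for a menu directory to change via inotify(7)
     * instead of stat()ing it several times a second. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/inotify.h>

    int main(void)
    {
        char buf[4096];
        int fd = inotify_init();
        if (fd < 0) {
            perror("inotify_init");
            return 1;
        }

        /* One watch replaces an unbounded stream of polling syscalls. */
        if (inotify_add_watch(fd, "/usr/share/applications",
                              IN_CREATE | IN_DELETE | IN_MOVE | IN_MODIFY) < 0) {
            perror("inotify_add_watch");
            return 1;
        }

        /* The process sleeps in read() and costs no CPU at all until
         * the kernel actually has a change to report. */
        for (;;) {
            ssize_t len = read(fd, buf, sizeof buf);
            if (len <= 0)
                break;
            puts("menu directory changed; rescan it");
        }
        close(fd);
        return 0;
    }

The difference is structural, not a micro-optimization: the polled version does work proportional to the polling rate forever, while this one does work proportional to actual changes.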
These arguments (the least common denominator and the no optimization
without instrumentation) really irritate me, because I started on a 386
and many common operations take more wall-clock time today than they did
back then. I'm now on a Pentium 4, for chrissakes, with a gigabyte of DDR
RAM. What has happened is that as the generations go on, some of us seem
to be trading in programmer time for CPU time (read: being lazy).
It seems like a perfectly acceptable bargain, and on some level it is. (I
don't think any sane person expects you to write desktop apps in
assembler, even though, if you somehow had the dedication and
concentration required, you'd end up with something at least slightly faster.)
The problem is that programmers are _being lazy_ and choosing points on
the "diminishing returns" curve that are well before returns start to
diminish.
I'm sure not all of Dave's identified misbehaviors were even apparent to
the programmers in question. Many of them are probably 'bugs'. But when I
hear about applications hammering the filesystem many times per second,
or using HTTP as an IPC mechanism between a system tray icon and another
program, I worry that we've all gone just a little bit crazy.
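As a hedged illustration of the second complaint -- the socket path and the one-line "protocol" below are invented, not what any real tray applet speaks -- the same exchange over a Unix domain socket is a few bytes per message, with no HTTP assembly or parsing on either end:

    /* Sketch: a tray icon asking its parent application for status
     * over an AF_UNIX socket.  Path and message are hypothetical. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int main(void)
    {
        struct sockaddr_un addr;
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        memset(&addr, 0, sizeof addr);
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, "/tmp/mytray.sock", sizeof addr.sun_path - 1);

        if (connect(fd, (struct sockaddr *) &addr, sizeof addr) < 0) {
            perror("connect");
            close(fd);
            return 1;
        }

        /* One small write instead of assembling a request line,
         * headers and a body, then parsing the response. */
        write(fd, "status?\n", 8);
        close(fd);
        return 0;
    }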
So I propose a new quote:
"Sensible optimizations give pleasure by default"
Posted Jul 21, 2006 19:18 UTC (Fri)
by pizza (subscriber, #46)
[Link] (7 responses)
Most of your response is tangential to the argument I submitted.

Here's the bottom line -- we're not all "above average" programmers. Even when we know what "the right way" is, we usually don't have that luxury due to externally-imposed constraints.
"Cheap, fast, good. Pick two"
Posted Jul 21, 2006 20:14 UTC (Fri)
by cventers (guest, #31465)
[Link] (6 responses)
> Most of your response is tangential to the argument I submitted.

Really? I'm not sure I see how. It seems to me like you were listing counterpoints to my complaint about programming to the least common denominator, and I was systematically addressing them (including your quote about optimization).

> Here's the bottom line -- we're not all "above average" programmers. Even when we know what "the right way" is, we usually don't have that luxury due to externally-imposed constraints.

What does "average" have to do with it? It doesn't take oodles of talent to build a model capable of using different implementations. Sometimes, it's even more trouble to try and come up with something generic!

You allude to constraints but never mention what some of them might be.

> "Cheap, fast, good. Pick two"

Why pick just two? One of the greatest things about free software development is that it's usually not the requirements-driven, oh-my-the-deadline-is-yesterday-and-the-customer-is-complaining-style development uncomfortably familiar to programmers working in the corporate world. And if our projects are being run that way (which I don't think they are), we should move further up the chain and ask why we're adopting policies and procedures that impose external constraints on our code quality.

This stuff isn't actually all that complicated. The problem is one of:

*A) No one had pointed out ways in which apps misbehave, so no one knew there was a problem (glad we have this paper to enumerate some examples!)

*B) Developers did what they thought was 'good enough' and just didn't realize that their implementation didn't match their expectations

*C) We're less-than-average programmers and we can't figure this stuff out for the life of us (I doubt that; there's oodles of awesome free software from all of the major projects out there, which demonstrates competency)

So I think Dave's paper was spot-on. We should skip the 'apologizing' step and move on to 'making it better'.
Posted Jul 23, 2006 13:08 UTC (Sun)
by pizza (subscriber, #46)
[Link] (5 responses)
"Fast, cheap, good. Pick two" is a reflection of the reality that nothing is without cost.

If you want your software to be developed "good and fast", then it's not going to be cheap. If you want it "fast and cheap", then it's not going to be all that good. If you want it "good and cheap", then it won't happen particularly quickly.

"Fast and cheap" is usually where software ends up when someone is directly footing the bill (and hence there is an upper bound on cost, aka budgets/deadlines, and "good" tends to suffer). "Good and cheap" is where F/OSS software traditionally lies, where the "it'll be done when it's done" attitude is the norm. Then we end up with the likes of NASA (or other life-critical situations), where the requirement of "good" is so important that it happens neither quickly nor cheaply.

The problem with the above generalization is that many larger F/OSS projects (including Gnome) actually fall into the first category, as the majority of the "work" is done by people required to do so, with formal goals, deadlines and budgets. F/OSS has gone and been corporatized!

(Another glaring hole in this generalization is that "good" means different things to everyone -- in the end, only the one footing the bill gets to make that call -- but that is the nature of generalizations.)

Dave's ("spot on", as you put it) paper was a direct result of the idea embodied by the "no premature optimization" blurb that you took so much issue with. Without that instrumentation, this handful of bugs/mistakes would likely not have come to light, and we wouldn't have been able to learn from them.

And finally, I would agree with you and chalk up the problems that Dave raised to (A) and (B), although both are symptoms of (C) -- which is usually due to inexperience, not idiocy. Subsequently, with better awareness of (A) and (B), (C) is lessened, as the programmer will presumably learn from their mistakes.
Posted Jul 23, 2006 17:10 UTC (Sun)
by cventers (guest, #31465)
[Link] (4 responses)
Most of what you say about "Fast, cheap, good. Pick two" is fine and good.

But all I'm really trying to say is that we, the F/OSS community, have the capacity to do better. Look at the Linux 2.6 process - I would call that "Fast, cheap, good". It's not perfect, but it's damn fast, it's still F/OSS and it's still /very/ good.

You could twist the definitions of fast, cheap, and good enough to make the "Pick two" argument apply to any project. The problem I have with "Pick two" and the earlier optimization quote is simply that most of the time I've heard an engineer say one, it's being invoked as an excuse for shoddy design. And I've personally witnessed that when you simply let a passion for your art drive your work, and sprinkle on a little bit of experience in the environment you're working in, you can deliver "fast, cheap, good" all at once.

F/OSS is getting more and more industrialized, but depending on the project, the majority of the code still comes from people with that passion -- people just scratching their itch. I hope our projects don't erode into the same corporately-managed disasters so commonplace to the proprietary software engineer. But since engineers have the power in F/OSS, I think if we focus on passion and on rejecting ideas like "fast, cheap, good -- pick two," we'll be entirely successful in breaking the traditional rules of development once again.

This is free software. The traditional rules of corporate development don't apply; please leave them at the door.
Posted Jul 23, 2006 17:55 UTC (Sun)
by NAR (subscriber, #1313)
[Link] (3 responses)
> Look at the Linux 2.6 process - I would call that "Fast, cheap, good". It's not perfect, but it's damn fast, it's still F/OSS and it's still /very/ good.

I wouldn't call the 2.6 process "fast, cheap and good". It might be fast, but it's certainly not good (the last usable kernel for me was 2.6.14) and definitely not cheap - I'd like to know how many kernel developers are funded for their work on the kernel. I think it's not a particularly low number.
Posted Jul 23, 2006 18:00 UTC (Sun)
by cventers (guest, #31465)
[Link] (2 responses)
If 'cheap' is a function including n (the rate of change) rather than a constant, then I think the kernel is about as 'cheap' as you can get.

It's unfortunate that you've had problems since 2.6.14. What sort of problems are you having?

After having seen the survey conducted here on kernel quality, it would seem like most users are pleased (I'm one of them).
Posted Jul 24, 2006 6:42 UTC (Mon)
by drag (guest, #31333)
[Link] (1 responses)
Each kernel gets better for me. The 2.4 series was better than 2.2, and 2.6 is better for me than 2.4.

Lower latencies, a more usable desktop, better responsiveness. My hardware is supported out of the box on new kernels, which it wasn't for older ones. The ALSA sound drivers are a huge improvement over OSS for me. With dmix I can have, get this, more than _one_ sound at a time and it doesn't sound like crap. Multimedia performance has improved.

(Of course I am still talking about the kernel here... its desktop scheduling options make life better.)

Stability has improved. Wireless support has improved. Udev makes things easier for me now that I just tell the computer what /dev files I want vs. having to dig around finding the stupid major/minor numbers for everything.
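(To show how little is involved -- the device and link name below are made up, and the ==/+= syntax is that of later udev releases -- a rule saying "I want this device to appear under a name of my choosing" is one line:)

    # Hypothetical rule: give the first video capture device a stable
    # /dev/tv name, no major/minor spelunking required.
    KERNEL=="video0", SYMLINK+="tv"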
Maybe if the other person were to post WHY the 2.6.15, 2.6.16, 2.6.17 series kernels are unusable, they would have received more sympathy.
Posted Jul 24, 2006 12:27 UTC (Mon)
by NAR (subscriber, #1313)
[Link]
> My hardware is supported out of the box on new kernels, which it wasn't for older ones.

Your mileage may vary, but I never managed to boot my old 486 with a 2.6 kernel - fortunately, it worked with 2.4. It didn't work well - the TCP connection tracking code kept tracking connections that were long gone, so the system ran out of memory - but it still worked. On the other hand, one of the two reasons I use 2.6 on my other computer is that with 2.6 I don't have to reboot between watching a DVD and burning a CD-R.

> Stability has improved. Wireless support has improved. Udev makes things easier for me

Again, your mileage may vary, but my computer locks up hard with every single 2.6 kernel if I start a larger I/O operation while watching TV with xawtv - and this wouldn't make a useful bug report. I don't have wireless cards and never felt the need for a dynamic /dev, so these features do not make me happy.

> WHY the 2.6.15, 2.6.16, 2.6.17 series kernels are unusable

Recording audio from TV doesn't work with mplayer. I've reported the bug - it's supposed to be in mplayer, and supposed to be fixed - yet it still didn't work when I tried last time. So I stick with 2.6.14.