An unexpected perf feature
Nautilus polls files related to the desktop menus every few seconds, rather than using the inotify API which was added for just this purpose.
Exactly when was the inotify API added? 2.6.x or 2.4.x?
Posted Jul 21, 2006 13:20 UTC (Fri) by corbet (editor, #1)
Posted Jul 21, 2006 13:44 UTC (Fri) by pizza (subscriber, #46)
Now what about the other platforms that Gnome has to run on? How long have they had inotify? Have they ever? Will it work the same way?
When one of your project goals is to be portable, you really do need to code to the least-common-denominator APIs. Special-case code paths add greatly to software complexity and make debugging more difficult.
Yes, userspace often does a lot of dumb things, but "not taking advantage of bleeding-edge kernel features" isn't usually one of them.
Posted Jul 21, 2006 13:53 UTC (Fri) by arjan (subscriber, #36785)
So there is quite reasonable infrastructure for this in Gnome... it's just not being used consistently
Posted Jul 21, 2006 17:02 UTC (Fri) by sepreece (subscriber, #19270)
Posted Jul 22, 2006 9:51 UTC (Sat) by drag (subscriber, #31333)
For instance with famd, if I had a mount point or something like that in my home directory, then it would crap out if I tried to go more than two directories deep, and basically caused anything to do with Gnome that concerns files (Nautilus, mostly) to break.
With gamin there is no problem.
I think that a huge part of the problem we have with performance on Linux desktop nowadays is that everybody was scrambling to get just the basics in place and everything more or less working.
Hal/Dbus/X.org/inotify (and its userspace helpers)/desktop search stuff/udev... etc. etc. All of it is thrown together to 'make it just work'.
Now it seems that the push is going towards 'making it work well': filling in the blanks, improving performance, that sort of thing.
Posted Jul 27, 2006 9:35 UTC (Thu) by nix (subscriber, #2304)
Apparently its inability to send notifications to other copies of itself over the network is a *feature*, but given that you're using NFS or a similar fs in any case, I can't imagine what extra security threats could be opened by sending notifications around. (FAM could do this.)
Posted Jul 21, 2006 15:18 UTC (Fri) by cventers (subscriber, #31465)
It's totally possible to build platform-independent code (hell, the
toolkits both of our desktops run on are portable to operating
systems /without/ UNIX APIs), yet specialize on each platform. Take the
kernel as a great example -- we have a nice mechanism called
"alternatives" that detects the processor model and CPU count, and then
re-writes parts of the kernel text on the fly in order to make it
maximally efficient. The developers could have instead shot for the
lowest common denominator (386) -- because the code would still certainly
work on everything else (provided that it's also built for SMP).
We depend on the huge mess of scripts known as "autom4te" so much these
days in order to make our buildsystems work, but when I watch all the
crap flying by on every package I build, I realize that few of them
actually /need/ all those damn checks. Why don't we make better use of
the tools we have? autom4te can check inotify. If it's present, don't
build a Gnome desktop that spams the kernel, CPU and memory bus every
second when there's no activity at all.
Posted Jul 21, 2006 16:04 UTC (Fri) by nix (subscriber, #2304)
However, this is perfectly doable.
Posted Jul 21, 2006 16:45 UTC (Fri) by cventers (subscriber, #31465)
But yes, just attempt inotify at startup. -ENOSYS? OK, we'll try something else instead.
Posted Jul 21, 2006 18:31 UTC (Fri) by nix (subscriber, #2304)
Hm. Looking at the sources, gamin has had an inotify backend since v0.0.8, Aug 26 2004, *long* before inotify hit the kernel proper. It is enabled by default.
Looks like this might be an out-and-out bug. I'll have a look this weekend and see if I can reproduce and fix it.
Posted Jul 23, 2006 17:51 UTC (Sun) by NAR (subscriber, #1313)
Yes, but I'm afraid this is way above the average application programmer's level.
Posted Jul 23, 2006 17:55 UTC (Sun) by cventers (subscriber, #31465)
Posted Jul 21, 2006 16:08 UTC (Fri) by pizza (subscriber, #46)
* Software outlives hardware, by several orders of magnitude. You really weaken your argument by trying to draw parallels there -- especially when modern distros *still* build userland for a stock i386.
* "least common denominator" gives you the greatest coverage with the least effort. Additional effort should be focused on where it does the most good, and that call is (hopefully) made by those who know the bigger picture and/or do the actual work. (I'd agree that inotify support is a promising candidate, but I'm just an armchair general)
* Different APIs can require radically different software architectures; it's not a matter of "writing an autom4te test"; someone has to actually write a non-trivial pile of non-trivial code, while leaving the existing path intact as a run-time fallback and maintaining complete backwards compatibility (source, binary, and behavioral) for the APIs that Gnome exports.
So while yes, the "least common denominator" argument sucks, it's not the suckiness of the argument itself, but rather the suckiness of the *reality* that the argument represents.
"Optimization without instrumentation is just mental masturbation"
Posted Jul 21, 2006 16:42 UTC (Fri) by cventers (subscriber, #31465)
The problem with the least common denominator argument isn't really the
suckiness of the reality that the argument represents; it's the fact that
it ever gets used as an excuse to write code in which "sub-optimal" is
accepted as a given.
Furthermore, the fact that different systems require different code to be
optimal is a fact of life. It's why we have abstraction layers at all. If
every system was the same, operating systems either wouldn't exist or
they'd be a hell of a lot more simple, and that goes for everything from
the bottom of the stack up. It's very much a reality, as you put it.
When you choose to support multiple systems, you should be ready to write
multiple implementations of the same function. Writing to the least
common denominator -- and not ever specializing -- is a cop-out.
> "Optimization without instrumentation is just mental masturbation"
I've never much been a fan of that argument either, because it's often
used to justify incredibly sloppy / inefficient code. The quote as it
stands is simply imprecise. There are /some/ optimizations which are
questionable enough that you very much want instrumentation before you
write large chunks of code, but the world just isn't black and white.
Put another way: I would like to think that any reasonably talented
systems programmer would know that polling files several times a second
for something like menu entries, or assembling entire HTTP queries and
responses several times a second to communicate with a system tray icon,
is a bad idea -- something that could be optimized. No need for
instrumentation at all.
These arguments (the least common denominator and the no optimization
without instrumentation) really irritate me, because I started on a 386
and many common operations take more wall-clock time today than they did
back then. I'm now on a Pentium 4, for chrissakes, with a gigabyte of DDR
RAM. What has happened is that as the generations go on, some of us seem
to be trading in programmer time for CPU time (read: being lazy).
It seems like a perfectly acceptable bargain, and on some level it is. (I
don't think any sane person expects you to write desktop apps in
assembler, even though if you somehow had the dedication and
concentration required you'd make something at least slightly faster).
The problem is that programmers are _being lazy_ and choosing points on
the "diminishing returns" curve that are well before returns actually start to diminish.
I'm sure not all of Dave's identified misbehaviors were even apparent to
the programmers in question. Many of them are probably 'bugs'. But when I
hear about applications hammering the filesystem many times per second,
or using HTTP as an IPC mechanism between a system tray icon and another
program, I worry that we've all gone just a little bit crazy.
So I propose a new quote:
"Sensible optimizations give pleasure by default"
Posted Jul 21, 2006 19:18 UTC (Fri) by pizza (subscriber, #46)
Here's the bottom line -- we're not all "above average" programmers. Even when we know what "the right way" is, we usually don't have that luxury due to externally-imposed constraints.
"Cheap, fast, good. Pick two"
Posted Jul 21, 2006 20:14 UTC (Fri) by cventers (subscriber, #31465)
Really? I'm not sure I see how. It seems to me like you were listing
counterpoints to my complaint about programming to the least common
denominator, and I was systematically addressing them (including your
quote about optimization).
> Here's the bottom line -- we're not all "above average" programmers.
> Even when we know what "the right way" is, we usually don't have that
> luxury due to externally-imposed constraints.
What does "average" have to do with it? It doesn't take oodles of talent
to build a model capable of using different implementations. Sometimes,
it's even more trouble to try and come up with something generic!
You allude to constraints but never mention what some of them might be.
> "Cheap, fast, good. Pick two"
Why pick just two? One of the greatest things about free software
development is that it's usually not the requirements-driven
development uncomfortably familiar to programmers working in the
corporate world. And if our projects are being run that way (which I
don't think they are), we should move further up the chain and ask why
we're adopting policies and procedures that impose external constraints
on our code quality.
This stuff isn't actually all that complicated. The problem is either
*A) No one had pointed out ways in which apps misbehave, so no one knew
there was a problem (glad we have this paper to enumerate some examples!)
*B) Developers did what they thought was 'good enough' and just didn't
realize that their implementation didn't meet their expectations
*C) We're less than average programmers and we can't figure this stuff
out for the life of us (doubt that, there's oodles of awesome free
software from all of the major projects out there, which demonstrates otherwise)
So I think Dave's paper was spot-on. We should skip the 'apologizing'
step and move on to 'making it better'.
Posted Jul 23, 2006 13:08 UTC (Sun) by pizza (subscriber, #46)
If you want your software to be developed "good and fast", then it's not going to be cheap. If you want it "fast and cheap" then it's not going to be all that good. If you want it "good and cheap" then it won't happen particularly quickly.
"fast and cheap" is usually where software ends up when someone is directly footing the bill (and hence, there is an upper bound on cost, aka budgets/deadlines, and "good" tends to suffer). "Good and cheap" is where F/OSS software traditionally lies, where the "it'll be done when it's done" attitude is the norm. Then we end up with the likes of NASA (or other life-critical situations), where the requirement of "good" is so important that it happens neither quickly nor cheaply.
The problem with the above generalization is that many larger F/OSS projects (including Gnome) actually fall into the first category, as the majority of the "work" is done by people required to do so, with formal goals, deadlines and budgets. F/OSS has gone up and been corporatized!
(Another glaring hole in this generalization is that "good" means different things to everyone -- In the end, only the one who is footing the bill gets to make that call -- but that is the nature of generalizations..)
And finally, I would agree with you and chalk up the problems that Dave raised to (A) and (B), although they both are symptoms of (C) -- which is usually due to inexperience, not idiocy. Subsequently, with better awareness of (A) and (B), (C) is lessened as the programmer presumably will learn from their mistakes.
Dave's ("spot on", as you put it) paper was a direct result of the idea embodied by the "no premature optimization" blurb that you took so much issue with. Without that instrumentation, this handful of bugs/mistakes likely wouldn't have come to light, and we wouldn't have been able to learn from them.
Posted Jul 23, 2006 17:10 UTC (Sun) by cventers (subscriber, #31465)
You could twist the definition of fast, cheap, and good enough to make
the "Pick two" argument apply to any project. The problem I have
with "Pick two" and the earlier optimization quote is simply that most of
the time I've heard an engineer saying one, it's being invoked as an
excuse for shoddy design. And I've personally witnessed that when you
simply let a passion for your art drive your work, and sprinkle on a
little bit of experience in the environment you're working in, you can
deliver "fast, cheap, good" all at once.
F/OSS is getting more and more industrialized, but depending on the
project, the majority of the code still comes from people with that
passion -- people just scratching their itch. I hope our projects don't
erode into the same corporately-managed disasters that are so commonplace for
the proprietary software engineer. But since engineers have the power in
F/OSS, I think if we focus on passion and rejecting ideas like "fast,
cheap, good -- pick two," we'll be entirely successful in breaking the
traditional rules of development once again.
This is free software. The traditional rules of corporate development
don't apply; please leave them at the door.
Posted Jul 23, 2006 17:55 UTC (Sun) by NAR (subscriber, #1313)
I wouldn't call the 2.6 process "fast, cheap and good". It might be fast, but it's certainly not good (the last usable kernel for me was 2.6.14) and definitely not cheap - I'd like to know how many kernel developers are funded for their work on the kernel. I think it's not a particularly low number.
Posted Jul 23, 2006 18:00 UTC (Sun) by cventers (subscriber, #31465)
It's unfortunate that you've had problems since 2.6.14. What sort of
problems are you having?
After having seen the survey conducted here on kernel quality, it would
seem like most users are pleased (I'm one of them).
Posted Jul 24, 2006 6:42 UTC (Mon) by drag (subscriber, #31333)
Lower latencies, more usable desktop. Better responsiveness. My hardware is supported out of the box on new kernels, which it wasn't for older ones. ALSA sound drivers are a huge improvement over OSS for me. With dmix I can have, get this, more than _one_ sound at a time, and it doesn't sound like crap. Multimedia performance has improved.
(Of course I am still talking about the kernel here... its desktop scheduling options make life better.)
Stability has improved. Wireless support has improved. Udev makes things easier for me now that I just tell the computer what /dev files I want vs having to dig around and finding the stupid major minor numbers for everything.
Maybe if the other person had posted WHY the 2.6.15, 2.6.16, and 2.6.17 series kernels are unusable, they would have received more sympathy.
Posted Jul 24, 2006 12:27 UTC (Mon) by NAR (subscriber, #1313)
Your mileage may vary, but I never managed to boot my old 486 with a 2.6 kernel - fortunately it worked with 2.4. It didn't work well, the TCP connection tracking code kept tracking connections that were long gone, so the system ran out of memory, but it still worked. On the other hand, one of the two reasons I use 2.6 on my other computer is that with 2.6 I don't have to reboot between watching a DVD and burning a CD-R.
Stability has improved. Wireless support has improved. Udev makes things easier for me
Again your mileage may vary, but my computer locks up hard with every single 2.6 if I make a larger I/O operation while watching TV with xawtv - and this wouldn't make a useful bug report. I don't have wireless cards and never felt the need for dynamic /dev, so these features do not make me happy.
WHY 2.6.15, 2.6.16, 2.6.17 series kernels are unusable
Recording audio from TV doesn't work with mplayer. I've reported the bug and it's supposed to be in mplayer and supposed to be fixed, yet it still didn't work when I tried last time. So I stick with 2.6.14.
Posted Jul 21, 2006 17:42 UTC (Fri) by vonbrand (subscriber, #4458)
The userland is normally compiled for i386 instructions only, but scheduled (instruction selection and ordering) for i686. The code where full i686 (or whatever) does make a real difference is few and far between (and there you do get i686 packages).
Distributions (and their users!) do pay a hefty price if there are zillions of package versions by CPU type.
Posted Jul 22, 2006 6:07 UTC (Sat) by dvdeug (subscriber, #10998)
Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds