
Well, this settles it for me

Posted Sep 7, 2009 6:47 UTC (Mon) by bvdm (guest, #42755)
Parent article: BFS vs. mainline scheduler benchmarks and measurements

Was it unreasonable for Ingo to respond? I think not. Con's announcement was widely reported and raised many questions. If something is said on the public square, surely anyone with an interest should be welcome to respond?

Did Ingo respond in an unreasonable way? No, his email was nothing but courteous, though written by someone who is evidently confident of his case.

Are Ingo's benchmarks unreasonable? No, only a fool would consider well-chosen benchmarks completely worthless. Ingo did not attack BFS's use cases, only made a case for the mainline ones.

The bottom line is that Ingo made it clear that his concern is the mainline scheduler. He could have picked arbitrary benchmarks and run them on a netbook if he wanted to embarrass Con.

If you can't stand the heat, why go back into the kitchen?



Well, this settles it for me

Posted Sep 7, 2009 7:22 UTC (Mon) by yoshi314 (guest, #36190)

I've been getting the impression recently that core kernel devs are totally disconnected from the desktop world. Ingo's choice of hardware is not too odd, but still unrealistic for most desktop users (or at least where I live).

The benchmarks he ran are relevant to more high-end machines.

Con has a point, both about the choice of hardware and the selection of benchmarks. He thinks more like a desktop user.

But his attitude is awful. This might serve as a starting point for a re-enactment of the scheduler flame wars.

Well, this settles it for me

Posted Sep 7, 2009 7:32 UTC (Mon) by bvdm (guest, #42755)

Firstly, Linux has a very small desktop presence, so not entirely optimizing for the desktop is a rational design decision, though "totally disconnected" is very hard to swallow.

Secondly, one hardly wants to re-implement something like a scheduler every year. Designing for the near and medium future creates stability. Anyway, you would be hard pressed to find a desktop machine without at least 2 cores these days.

As for netbooks, if interactivity (people keep posting about gaming FPS and high-def audio and high-res desktop experience) is such a concern, why are you using a netbook? A typical netbook has far fewer processes running.

I don't think there ever was a "war". It is a shame that so much unproductive drama was generated by someone who is evidently skillful at performing for the peanut gallery.

Well, this settles it for me

Posted Sep 7, 2009 7:44 UTC (Mon) by k8to (guest, #15413)

In conversations among engineers throughout Silicon Valley who have reason to push their code into or hack on Linux, the view of the LKML as unworkably hostile, short-sighted, and unwilling to accept external ideas is now the norm. This is essentially a complete reversal from 10 years ago, when people viewed it as relatively open and inviting.

Yes, this is subjective and I'm sure it has noise. I think it's also the truth.

Well, this settles it for me

Posted Sep 7, 2009 7:45 UTC (Mon) by k8to (guest, #15413)

I.e., I think this is the "totally disconnected" angle.

Well, this settles it for me

Posted Sep 7, 2009 8:03 UTC (Mon) by bvdm (guest, #42755)

You are a fortunate man to have the time and opportunity to traverse Silicon Valley so thoroughly :-p

But seriously, the only objective measures we have are the number of contributors and SLOC added, and both of these are still accelerating.

Now I would be astounded if the Linux kernel were the only technical project in the world without non-technical problems, but don't you think there are many other explanations for this change in perception? Such as, perhaps:

- That having your code included in the kernel has a much increased monetary benefit and is therefore more sought after
- That the existing kernel developers have increased in their experience and skill and that standards for acceptance are therefore higher today
- That the stature of being a core kernel developer has risen and that ego may be involved
- That many parts of the kernel are near-optimal, or at least very mature, and that it is sensible to value stability in those areas

And are the driver staging tree, and desktop and security advances such as KMS and SMACK, not counterexamples to what you are suggesting?

Well, this settles it for me

Posted Sep 7, 2009 18:24 UTC (Mon) by k8to (guest, #15413)

It's not the money.

It's likely to be driven by standards, but the contention is that these standards are often more arbitrary than useful.

Ego on the part of the maintainers is certainly involved. Among my contacts, ego on the part of the author has certainly not *risen* in the interim, although it may be high (I doubt it).

Stability has certainly become more prized.

Well, this settles it for me

Posted Sep 7, 2009 16:06 UTC (Mon) by einstein (guest, #2052)

> Firstly, Linux has a very small desktop presence,

I don't think we desktop Linux users are entirely happy with smug little comments like that. For us, Linux is the only desktop presence, and it looms large.

> so not entirely optimizing for the desktop is a rational design decision, though "totally disconnected" is very hard to swallow.

I think Con may have a valid point in questioning the one-size-fits-all paradigm. While it's an admirable goal to create a single kernel which runs optimally on everything from PDAs to supercomputing clusters, there may be too much of a divergence in performance profiles for that to be entirely practical.

As Linus has said, a desktop Linux presence is vital to its viability, so optimizing desktop interactivity ought to be a very high priority.

Well, this settles it for me

Posted Sep 7, 2009 22:24 UTC (Mon) by mingo (guest, #31122)

> As Linus has said, a desktop Linux presence is vital to its viability, so optimizing desktop interactivity ought to be a very high priority.

It is. See for example this recent discussion on lkml. That discussion and those (non-trivial) patches were all about desktop latencies - and it's all part of v2.6.30 now.

Because I can

Posted Sep 7, 2009 22:20 UTC (Mon) by man_ls (guest, #15091)

> As for netbooks, if interactivity (people keep posting about gaming FPS and high-def audio and high-res desktop experience) is such a concern, why are you using a netbook?
Because they are light and cute? You have not understood what "interactivity" means. People do not post about FPS or high-def audio per se, but about jitter, frame drops and audio skips. Those are really nasty when watching a movie or listening to music, and computers are said to be multitasking these days.

Because I can

Posted Sep 9, 2009 7:57 UTC (Wed) by gmaxwell (guest, #30048)

Jitter, frame drops, and audio skips are all *easily measurable*. Yet *none* of the advocacy of BFS that I've seen includes any measure of these things. Only vague hand-waving about smoothness. Perhaps these people should color the edges of their disks with green markers... I hear it reduces jitter.

Meanwhile I do audio processing with a ~2ms processing interval using the mainline scheduler, thrashing the system, high loads... and underruns are basically unheard of, at least after tossing the drivers and hardware that I determined were misbehaving (with measurements... imagine that!).

I don't doubt that there are genuine areas for improvement, even in the scheduler, but it isn't going to get better without real measurements and some social skills superior to those of Hans Reiser.
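
The kind of measurement described above is easy to sketch. Below is a minimal wakeup-jitter probe in the spirit of cyclictest; the 2 ms period mirrors the processing interval mentioned above, and the iteration count and output format are arbitrary choices for illustration, not anything from the thread:

    /*
     * Minimal wakeup-jitter probe in the spirit of cyclictest: sleep on a
     * 2 ms periodic timer and report how late each wakeup actually is.
     * Build with: gcc -O2 -std=gnu99 -o jitter jitter.c -lrt
     */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    #define PERIOD_NS  2000000L   /* 2 ms period, as in the audio case above */
    #define ITERATIONS 5000

    static int64_t ns(const struct timespec *t)
    {
        return (int64_t)t->tv_sec * 1000000000LL + t->tv_nsec;
    }

    int main(void)
    {
        struct timespec next, now;
        int64_t worst = 0, total = 0;

        clock_gettime(CLOCK_MONOTONIC, &next);
        for (int i = 0; i < ITERATIONS; i++) {
            /* Advance the absolute deadline; absolute sleeps avoid drift. */
            next.tv_nsec += PERIOD_NS;
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec++;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
            clock_gettime(CLOCK_MONOTONIC, &now);

            int64_t late = ns(&now) - ns(&next);   /* wakeup latency in ns */
            if (late > worst)
                worst = late;
            total += late;
        }
        printf("avg wakeup latency %lld us, worst %lld us\n",
               (long long)(total / ITERATIONS / 1000),
               (long long)(worst / 1000));
        return 0;
    }

Run one copy while a kernel build saturates the machine, once per scheduler, and the "smoothness" argument becomes a pair of numbers.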

Interactive benchmarks

Posted Sep 9, 2009 20:14 UTC (Wed) by man_ls (guest, #15091)

You are right, there are no benchmarks that show that BFS is good at interactivity. However, I contend that such "hand-waving" is to be expected from an anaesthetist and a crowd of enthusiasts (and is not a bad thing at all). The real pity is that on lkml, a list full of high-flying engineers, nobody has been able to construct those benchmarks or do those measurements either. The best we have is a scheduler hacker posting odd benchmarks on esoteric hardware. No offense to Ingo, who was very respectful and had interesting data, but it was all biased:
> we tune the Linux scheduler for desktop and small-server workloads mostly [...] what i consider a sane range of systems to tune for - and should still fit into BFS's design bracket as well according to your description: it's a dual quad core system with hyperthreading
And then he repeated the measurements on a quad-core machine, the best he has offered so far. It seems that, despite an expressed focus on the desktop, a netbook and a few days of testing on it are out of reach.

As to the benchmarks, the first test was how fast he can build the kernel using n processes. Well, this only measures throughput; if each process is supposed to be interactive, it is not unreasonable to expect that they will be more easily interrupted and thus the build will take longer. Then came a very artificial pipe-messaging test, followed by similarly contrived benchmarks -- which CFS has already been tuned for. So the "other side" (lkml) has not been able to produce anything better either to show that CFS is good at interactivity, measuring skips and jitter, and I find this to be even more pitiful.

Interactive benchmarks

Posted Sep 9, 2009 23:40 UTC (Wed) by njs (subscriber, #40338)

> As to the benchmarks, the first test was how fast can he build the kernel using n processes.

To be fair, that benchmark is originally Con's, not Ingo's (Con's original announcement claims that "make -j4 on a quad core machine with BFS is faster than *any* choice of job numbers on CFS").

Interactive benchmarks

Posted Sep 10, 2009 9:52 UTC (Thu) by man_ls (guest, #15091)

More to the point: even when one side proposed invalid benchmarks, the other side was not able to come up with anything better. (And no, "beat them at their own benchmarks" is not a valid excuse; we are talking about engineering, not about marketing.)

Well, this settles it for me

Posted Sep 7, 2009 7:46 UTC (Mon) by andreashappe (subscriber, #4810)

Hi,

> I've been getting the impression recently that core kernel devs are totally disconnected from the desktop world. Ingo's choice of hardware is not too odd, but still unrealistic for most desktop users (or at least where I live).

I bought a new desktop rig three months ago and paid a not unreasonable 1100 euros for a quad-core (+hyper-threading) i7 processor backed by 6 GB of RAM.

I do not believe that Linux should target < 1000 Euro machines (at least not for mainline development). If there's a use for another scheduler, Con can keep it out-of-tree (as he seems to intend to). When distributions pick it up it might even get into mainline. But his childish behaviour after Ingo benchmarked his patch (with a workload well within Con's use case description) does not bode well. Not well at all.

cheers, Andreas

Well, this settles it for me

Posted Sep 7, 2009 8:03 UTC (Mon) by Cato (guest, #7643)

So the whole focus on netbooks is a waste of time, then? The majority of laptops and desktops these days cost less than 1000 Euros/USD - in fact when building a new dual-core desktop system for casual web surfing I found it hard to spend more than 400 euros, and the resulting system is far faster than really needed. And then there's the whole embedded space of course, and all the people introduced to Linux by putting it on PCs that are too old to run a recent Windows version well, or by turning an old PC into a small server.

Well, this settles it for me

Posted Sep 7, 2009 8:11 UTC (Mon) by andreashappe (subscriber, #4810)

> So the whole focus on netbooks is a waste of time, then?

I was talking about _mainline_. Pray read the rest of my post (where I mentioned out-of-tree patches). And AFAIK embedded systems often have out-of-tree patchsets for their architectures.

> The majority of laptops and desktops these days cost less than 1000 Euros/USD

If a new scheduler were added to the kernel, it would take 2-3 release cycles (at least)... by which time multi-core systems will be even more common than today.

cheers, Andreas

Well, this settles it for me

Posted Sep 7, 2009 15:45 UTC (Mon) by broonie (subscriber, #7078)

Embedded systems are using fewer and fewer non-mainline patches - essentially all the CPU vendors who don't have good mainline support are experiencing substantial pressure to sort that situation out sooner rather than later.

Well, this settles it for me

Posted Sep 7, 2009 16:01 UTC (Mon) by andreashappe (subscriber, #4810)

Wouldn't the situation be the same with an out-of-tree scheduler? If it reaped benefits, pressure for inclusion would build up.

cheers, Andreas

Well, this settles it for me

Posted Sep 7, 2009 16:25 UTC (Mon) by broonie (subscriber, #7078)

Yes, though Con's disinterest in that might be an issue.

Well, this settles it for me

Posted Sep 7, 2009 8:30 UTC (Mon) by sitaram (guest, #5959)

You must be channeling Marie Antoinette... :-)

You will not believe the number of people in India who still use P4s (and God even P3s sometimes). Far more than the Core2Duo kind, I rather suspect. Maybe not in new purchases but in total numbers. We don't throw away stuff so fast anyway.

After reading your email I'm even more convinced that Ingo did not understand what Con was trying to say (*)

Sitaram

(*) ...or he did but didn't want to risk saying the sort of stuff you said ;-)

Well, this settles it for me

Posted Sep 7, 2009 9:13 UTC (Mon) by endecotp (guest, #36428)

> I do not believe that Linux should target < 1000 Euro machines

Maybe you're living on a different planet. The only time I've ever spent anything like that much was my first PC back in 1994 - a 66MHz 486.

Well, this settles it for me

Posted Sep 7, 2009 12:46 UTC (Mon) by pboddie (guest, #50784)

> Maybe you're living on a different planet. The only time I've ever spent anything like that much was my first PC back in 1994 - a 66MHz 486.
Indeed. Although there can be good reasons for paying €1000 (or £1000) for a system, it's been a long time since anyone really had to. It reminds me of the "Killer PCs for £1500" idiocy the UK computing press used to run on the cover of their magazines every month back in the early-to-mid 1990s, and even at that time such dull retail summaries served the advertisers far more than they did the actual readership.

Well, this settles it for me

Posted Sep 7, 2009 16:07 UTC (Mon) by andreashappe (subscriber, #4810)

> Maybe you're living on a different planet.

Could be, I'm using it for coding and running statistics stuff mostly (while doing 'normal' video/music listening).

But that thing cost me around 1000 euros four months ago and would be under that by now... and it will be fairly standard *before* a new scheduler could be added to mainline.

People experiencing performance or latency problems on existing hardware might be better off just *reporting* their problems to the lkml. Ingo is quite responsive to feedback.

(Embedded usage differs... but that is something the market (tm) should be perfectly able to decide.)

Well, this settles it for me

Posted Sep 7, 2009 11:24 UTC (Mon) by rsidd (subscriber, #2582)

> If you can't stand the heat, why go back into the kitchen?

Con did not go back into the kitchen. He was explicitly avoiding LKML. Ingo tried to pull him in. And posting graphs as 6001x4201 JPG files shows extraordinary cluelessness. Every graphing program I've seen supports vector formats like EPS or PDF, and if he must use JPG, he can at least choose a size that fits on screen -- or does he use a 6000x4200 resolution monitor?

I'm running a Core 2 duo laptop with 4 GB RAM, and most of the time I don't suffer interactivity issues. But on lesser machines it is a big problem. If Ingo doesn't use such machines, he should be quiet. Con's problem was not "performance", it was interactivity, and Ingo's benchmarks are basically beside the point.

Jens Axboe posted other benchmarks that sound more reasonable as measures of interactivity (which is Con's concern, not "performance"), and he is not happy with CFS, but he was not able to boot the BFS kernel.

Well, this settles it for me

Posted Sep 7, 2009 11:38 UTC (Mon) by bvdm (guest, #42755)

Con made a very public re-entry and raised many questions. Ignor had every right to respond. And he did so in a calm and admirable manner.

Your comments about the image size are just ad hominem, which I will ignore.

Have you read Ignor's email carefully or at all? He is clearly making the case that, whatever BFS's advantages on lower end machines may be (which he chose not to contest), CFS is still better suited for the mainline.

No-one is arguing that CFS is perfect, but I have a grave concern that Con is *again* pissing in the drinking well with his style of doing things.

Well, this settles it for me

Posted Sep 7, 2009 22:35 UTC (Mon) by man_ls (guest, #15091)

> Ignor had every right to respond. [...] Have you read Ignor's email carefully or at all?
No, it's pronounced "Aye gnor". (Sorry, couldn't resist after the second mention.)

Well, this settles it for me

Posted Sep 7, 2009 13:05 UTC (Mon) by aigarius (subscriber, #7329)

The images are 1024px wide now.

Well, this settles it for me

Posted Sep 9, 2009 11:27 UTC (Wed) by liljencrantz (guest, #28458)

Ingo has said that the graph size was a user error; he apologized and replaced the images. Calling him «extraordinarily clueless» without knowing the facts is hostile and unwarranted. Mistakes happen.

I agree that Ingo's choice of test machine and benchmarks is telling when it comes to his priorities - he gets paid to create software that runs well on big systems, and 8 CPUs probably looks small to him. No malice or stupidity involved, just a different perspective.

I think the ball is firmly in the BFS camp's court. Con won't and shouldn't deal with this, but any random BFS user with a bit of time could sit down and redo a set of benchmarks that _he_ feels is more relevant and use them as a counterpoint. Maybe compiling vim on an Atom CPU, as well as some measurements of dropped frames in mplayer while compiling? Latencies and stuttering may be hard to measure, but it is far from impossible. Something better than «it feels better when i shake my mouse» is needed.
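
As one illustration of what such a counterpoint might look like, here is a toy "dropped frame" counter: it emulates a 25 fps playback loop and counts deadline misses, meant to run while a compile (say, make -j8) loads the machine. The frame rate, the 5 ms lateness budget, and the one-minute duration are all assumptions picked for the sketch, not measurements from the thread:

    /*
     * Toy "dropped frame" counter: emulate a 25 fps playback loop and count
     * frames that wake up more than 5 ms past their deadline. Run it while
     * e.g. "make -j8" loads the box, once per scheduler, and compare counts.
     * Build with: gcc -O2 -std=gnu99 -o frames frames.c -lrt
     */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    #define FRAME_NS  40000000L   /* 40 ms per frame = 25 fps */
    #define BUDGET_NS  5000000L   /* "dropped" if more than 5 ms late */
    #define FRAMES    1500        /* one minute of simulated playback */

    int main(void)
    {
        struct timespec next, now;
        long dropped = 0;

        clock_gettime(CLOCK_MONOTONIC, &next);
        for (int i = 0; i < FRAMES; i++) {
            /* Next frame deadline, kept as an absolute time to avoid drift. */
            next.tv_nsec += FRAME_NS;
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec++;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
            clock_gettime(CLOCK_MONOTONIC, &now);

            int64_t late = (int64_t)(now.tv_sec - next.tv_sec) * 1000000000LL
                         + (now.tv_nsec - next.tv_nsec);
            if (late > BUDGET_NS)
                dropped++;   /* this frame would have been skipped */
        }
        printf("%ld of %d simulated frames missed their deadline\n",
               dropped, FRAMES);
        return 0;
    }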

