
RCU, cond_resched(), and performance regressions

Posted Jun 26, 2014 23:46 UTC (Thu) by gerdesj (subscriber, #5446)
In reply to: RCU, cond_resched(), and performance regressions by PaulMcKenney
Parent article: RCU, cond_resched(), and performance regressions

Benchmarks are great, but they must have some practical application to reality. I will grant that a synthetic exercise that concentrates attention on peculiarities can be a useful tool, but it can also be the tail wagging the dog.

What kind of workload would: "... opens and closes a lot of files while doing little else ..."?

Cheers
Jon



RCU, cond_resched(), and performance regressions

Posted Jun 27, 2014 2:50 UTC (Fri) by mathstuf (subscriber, #69389) [Link]

> What kind of workload would: "... opens and closes a lot of files while doing little else ..."?

Virus scanner (though this probably reads quite a bit too). File indexer (small, targeted reads for things like ID3 and EXIF tags). Emacs (I kid, I kid). Nothing else comes to mind at the moment.

RCU, cond_resched(), and performance regressions

Posted Jun 27, 2014 16:51 UTC (Fri) by hansendc (subscriber, #7363) [Link] (3 responses)

Jon,

Any real-world workload is a mix of the things we measure in a microbenchmark like this. The microbenchmark just breaks the workload down into constituent pieces so that each piece can be measured more easily.

Almost any Linux system does lots of opens and closes. On my system, one instance of 'top' can do thousands of them a second. Everyone should care about how fast these kinds of very common operations are, even if they can't measure the overhead when they get slower by a small amount.

RCU, cond_resched(), and performance regressions

Posted Jun 27, 2014 18:01 UTC (Fri) by PaulMcKenney (✭ supporter ✭, #9624) [Link] (2 responses)

And one reason for doing this is that there might be a series of small changes, each of which provides (say) either a 0.5% improvement or a 0.5% degradation. Measuring these changes one at a time against a more realistic, heavyweight application-based benchmark might show no measurable change for any of them; taken together, their effects might cancel each other out, again yielding no measurable change in performance.

In contrast, if you have a small tight benchmark, you might be able to sort the changes that improve performance from those that degrade performance. Of course, you should follow up by measuring the collection of changes that improved performance on a more realistic benchmark. After all, sometimes small changes interact in surprising ways.

RCU, cond_resched(), and performance regressions

Posted Jun 27, 2014 19:42 UTC (Fri) by dlang (guest, #313) [Link] (1 responses)

While everything you say is correct, it's also important to keep in mind that microbenchmarks tend to stress the system in ways that differ from normal use.

In this case, it's not that individual opens and closes get slower in isolation; it's that when too many of them happen at once, they end up getting slower.

So while a microbenchmark may show a 3% penalty, it's very possible that a real-world task doing 1/10 as many opens/closes (because it's doing real work in between) would see not a 0.3% penalty but no measurable penalty at all.

RCU, cond_resched(), and performance regressions

Posted Jun 27, 2014 19:54 UTC (Fri) by PaulMcKenney (✭ supporter ✭, #9624) [Link]

Agreed. And the points you make are exactly why I said that you should follow up microbenchmark tests with tests on a more realistic benchmark.


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds