Is there a single performance index that's useful for engineers?

Posted Sep 19, 2017 10:54 UTC (Tue) by k3ninho (subscriber, #50375)
Parent article: Testing kernels

> There was some thought that it would be nice to have a benchmark that boiled down to a single number that could be compared between systems (like the idea behind BogoMips). There was also a fair amount of skepticism about how possible that might turn out to be, however.

I'd like to know how this was pitched, and how much expertise the people stating it had in measuring and evaluating performance.

If the answer is 'little experience', then we can apply Dunning-Kruger, note that 'how hard can it be?' always turns out to be 'much harder than you can imagine', and ignore it.

If it's something meaningful from people who want a number, I'd love to learn how you can put a single number on performance that doesn't need further qualification. Imagine you have a suite of tests and aggregate the scoring, but have two systems which aggregate to the same final tally: one with stunning performance in a sole category of work and terrible scores elsewhere, versus one with middling scores across the board. Just as there's an apples-to-oranges comparison hidden by that final tally, there are engineers who want different sorts of performance from their hardware -- one will want a high score in that particular sole category of work in her datacentre, while another will want good middle-tier performers.
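
As a toy illustration of how an aggregate hides that difference (the scores below are invented for the sake of argument, not taken from any real benchmark suite), a few lines of Python:

    # Two systems scored in four categories; numbers invented for illustration.
    specialist = [95, 10, 10, 10]    # stunning in one category, terrible elsewhere
    all_rounder = [31, 31, 31, 32]   # middling across the board

    # Both aggregate to the same final tally, so a single index can't tell them apart.
    print(sum(specialist), sum(all_rounder))   # prints: 125 125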

Alternatively, consider the situation where you need to feed the scheduler fine-grained information about cache topology, the time to refill caches, and the number of CPU clock cycles and instructions not completed when a cache is missed, plus what the balance of moving data versus transforming data is for the work at hand. The model of a CPU's layout, and of the board it's running in, needs to play a part in quantifying a system's performance, along with information about the profile of the task under test.
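
A back-of-the-envelope sketch of the kind of inputs even a crude model would need (the formula and every number here are invented purely for illustration):

    # Toy cost model, invented for illustration: estimate cycles for a task from
    # instruction count, cache-miss behaviour and how much of the work is moving
    # data rather than transforming it.
    def estimated_cycles(instructions, miss_rate, miss_penalty_cycles,
                         move_fraction, cycles_per_instruction=1.0):
        compute = instructions * cycles_per_instruction
        stalls = instructions * miss_rate * miss_penalty_cycles
        # Movement-heavy work leans on the memory system more than the core does.
        return compute + stalls * (1.0 + move_fraction)

    # Same instruction count, very different answers depending on the task profile.
    print(estimated_cycles(1e9, miss_rate=0.001, miss_penalty_cycles=300, move_fraction=0.2))
    print(estimated_cycles(1e9, miss_rate=0.02, miss_penalty_cycles=300, move_fraction=0.8))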

Those things suggest that a single index for performance will need to be broken down into components by anyone needing to use it for serious performance assessment; anyone wanting bragging rights will be happy with a single score.

K3n.



Is there a single performance index that's useful for engineers?

Posted Sep 19, 2017 19:56 UTC (Tue) by james (subscriber, #1325) [Link] (1 responses)

Given the context (detecting performance regressions) they might not actually be concerned about comparing different systems. They just want to know if the number goes down on any one system.

Is there a single performance index that's useful for engineers?

Posted Sep 21, 2017 9:14 UTC (Thu) by k3ninho (subscriber, #50375) [Link]

>Given the context (detecting performance regressions) they might not actually be concerned about comparing different systems.
I'd read it as a new paragraph that started a wholly new idea in a section about 'Benchmarks and Fuzzing': it lost the context of the previous paragraph's words about preventing regressions, and it was followed by a sentence that could apply either to regression-prevention or to an abstract kernel performance number. Thanks for making this clearer.

K3n.

Is there a single performance index that's useful for engineers?

Posted Sep 20, 2017 3:08 UTC (Wed) by nevets (subscriber, #11875) [Link] (2 responses)

Being the one that actually suggested this, I'll explain the thought behind it.

This was not about seeing whose change has the bigger stick, but more of a focus on regressions. For example, I use hackbench to see how my changes affect the system. hackbench is really a hack; I don't take it too seriously, but when I screw something up, it tends to show that quite quickly. The idea is to have some kind of benchmark that is not for true comparisons between competing features, but more to see whether some code was added that caused a major performance regression.

The kernel selftests are there to test whether your code breaks something in the kernel, but we have nothing to show that we caused a performance regression. Reading benchmark reports, as you state, takes skill. The idea is to have many people run two kernels on the exact same setup and compare the numbers to see whether they changed drastically or not. If the numbers only change within an acceptable standard deviation, then there should be nothing to worry about. But if you see a large spike in a latency, or in the time to complete a micro-benchmark, then perhaps it should be reported and analyzed further to see whether there is indeed a problem.
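
A minimal sketch of that workflow, assuming hackbench is installed and prints a "Time: <seconds>" line, and using an invented baseline file name: run it once on the known-good kernel to record a baseline, then again on the new kernel on the same machine to compare.

    # Sketch of the "same setup, two kernels" check: record times on a known-good
    # kernel, then flag large changes on the new kernel. Assumes hackbench is
    # installed and prints a "Time: <seconds>" line; the baseline file name is
    # invented for this example.
    import re
    import statistics
    import subprocess
    import sys

    RUNS = 10
    BASELINE = "hackbench-baseline.txt"

    def sample_times(runs=RUNS):
        times = []
        for _ in range(runs):
            out = subprocess.run(["hackbench"], capture_output=True, text=True).stdout
            match = re.search(r"Time:\s*([0-9.]+)", out)
            if match:
                times.append(float(match.group(1)))
        return times

    def main():
        new = sample_times()
        try:
            with open(BASELINE) as f:
                base = [float(line) for line in f if line.strip()]
        except FileNotFoundError:
            # First run: record a baseline instead of comparing.
            with open(BASELINE, "w") as f:
                f.writelines(f"{t}\n" for t in new)
            print("baseline recorded:", new)
            return
        mean_base, sd_base = statistics.mean(base), statistics.stdev(base)
        mean_new = statistics.mean(new)
        # Only flag changes well outside the baseline's own run-to-run noise.
        if mean_new > mean_base + 3 * sd_base:
            print(f"possible regression: {mean_base:.3f}s -> {mean_new:.3f}s")
            sys.exit(1)
        print(f"within noise: {mean_base:.3f}s -> {mean_new:.3f}s")

    if __name__ == "__main__":
        main()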

This is why I compared it to BogoMips. Those are truly bogus, but I did get better numbers when running on better hardware. And there were times that I screwed up something, and those BogoMips showed that there was a screw up somewhere.

I do agree with your worry. There will be those who take a single number and complain, "Hey, this change made this number go up", with no clue that it made a huge difference someplace else that counters it. The main point is to find issues. A complaint like this may be annoying, but it can be blown off with an explanation of why it happened. Still, it would be nice to have something to look at, see that it changed drastically, and perhaps find that a change had an effect someplace you did not intend.

Is there a single performance index that's useful for engineers?

Posted Sep 21, 2017 8:10 UTC (Thu) by ColinIanKing (guest, #57499) [Link]

One tool that I use is stress-ng; it can exercise various components of the kernel and also has a bogo-ops throughput metric that can be helpful for detecting performance regressions. See http://kernel.ubuntu.com/~cking/stress-ng/
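
A minimal sketch of driving that from a script, assuming stress-ng is installed; --cpu, --timeout and --metrics-brief are real stress-ng options, but the metrics output layout varies between versions, so this only filters out the relevant lines for eyeballing rather than parsing the columns:

    # Run a short stress-ng CPU test and show its bogo-ops metrics lines.
    # Column layout differs between stress-ng versions, so just filter the
    # lines rather than parse them; compare the numbers between two kernels
    # booted on the same machine.
    import subprocess

    cmd = ["stress-ng", "--cpu", "4", "--timeout", "60s", "--metrics-brief"]
    result = subprocess.run(cmd, capture_output=True, text=True)

    # stress-ng writes informational output; search both streams to be safe.
    for line in (result.stdout + result.stderr).splitlines():
        if "bogo" in line or "ops/s" in line:
            print(line)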

Is there a single performance index that's useful for engineers?

Posted Sep 21, 2017 9:43 UTC (Thu) by k3ninho (subscriber, #50375) [Link]

Thank you for that explanation.

K3n.

