However, you can get an estimate of when you're going to hit those thresholds with some fairly simple arithmetic. It's a shame more people don't try.
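The "simple arithmetic" in question is usually just linear extrapolation: measure a growing resource twice, compute the growth rate, and divide the remaining headroom by that rate. A minimal sketch, with hypothetical numbers:

```python
def days_until_threshold(day0, value0, day1, value1, threshold):
    """Estimate the day a linearly growing metric reaches `threshold`.

    Returns None if the metric isn't growing at the observed rate.
    """
    rate = (value1 - value0) / (day1 - day0)  # units per day
    if rate <= 0:
        return None  # flat or shrinking; no crossing at this rate
    return day1 + (threshold - value1) / rate

# Example: disk usage was 400 GB on day 0 and 430 GB on day 30,
# and the filesystem holds 600 GB.
print(days_until_threshold(0, 400, 30, 430, 600))  # -> 200.0
```

That's a 1 GB/day growth rate against 170 GB of headroom, so roughly day 200: enough warning to plan, even if the estimate is crude.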
A niggle about linearity
Posted May 13, 2011 2:59 UTC (Fri) by raven667 (subscriber, #5198)
I've struggled with this kind of issue in the past when trying to understand the performance issues and needs of a large in-house application that I supported for many years. I got it wrong many times, and the simple estimations that might have helped only look simple in retrospect. There is a lot of pressure to treat databases and storage as a black box until you ask more from them than they can give.
Posted May 13, 2011 13:13 UTC (Fri) by andrewt (subscriber, #5703)
Posted May 13, 2011 20:05 UTC (Fri) by dlang (✭ supporter ✭, #313)
in many cases they are not where you expect them to be (the hyperthreaded cpu utilisation is one example, locking overhead with multiple processors is another)
frequently there are factors in play that you don't know about, and the result is that until you test the system at a particular load, you have no way of knowing whether it can handle that load.
interpolation (guessing how things work between measured points) is fairly reliable
extrapolation (guessing how things will work beyond measured points) is only reliable until some new factor shows up.
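The interpolation/extrapolation distinction can be illustrated with a toy model (entirely hypothetical numbers): a system that behaves linearly up to a hidden limit, say lock contention kicking in past 80% load. A straight-line fit through low-load measurements interpolates well but extrapolates badly once the unseen factor shows up:

```python
def true_latency(load):
    # Toy system: linear cost until load 80, then contention blows up.
    base = 10 + 0.5 * load
    return base if load <= 80 else base + (load - 80) ** 2

# Fit a line through two measured points at loads 20 and 60.
x0, x1 = 20, 60
y0, y1 = true_latency(x0), true_latency(x1)
slope = (y1 - y0) / (x1 - x0)

def predict(load):
    return y0 + slope * (load - x0)

print(predict(40), true_latency(40))  # interpolation: 30.0 vs 30.0
print(predict(95), true_latency(95))  # extrapolation: 57.5 vs 282.5
```

Between the measured points the model is exact; beyond them it is off by a factor of five, and nothing in the measured data hints at that.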
Posted May 22, 2011 22:29 UTC (Sun) by rodgerd (guest, #58896)
A: Because when I run the new query you put in from a simple script, it uses a whole CPU and takes a second. Your performance limit is $NUMCPU/second, at best.
Q: [goes away to redo query]
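The arithmetic behind that answer is the one-liner kind the earlier comments call for: if a single query burns a whole CPU for one second, a machine with N CPUs can sustain at most N such queries per second, and that ceiling assumes nothing else needs the CPU. A sketch with made-up numbers:

```python
def max_queries_per_second(num_cpus, cpu_seconds_per_query):
    """Upper bound on sustained query throughput from CPU cost alone."""
    return num_cpus / cpu_seconds_per_query

print(max_queries_per_second(8, 1.0))   # -> 8.0 per second, at best
print(max_queries_per_second(8, 0.05))  # -> 160.0 after reworking the query
```

Measuring the per-query cost from a simple script, as in the exchange above, is often all it takes to see the limit before production does.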
Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds