
Sometimes performance is not the main factor

Posted Oct 21, 2009 15:47 UTC (Wed) by pj (subscriber, #4506)
In reply to: Sometimes performance is not the main factor by nevets
Parent article: KS2009: Performance regressions

Agreed, maintainability is pretty key. It would be nice, though, if we could teach the compiler about those kinds of micro-optimizations - then we could be both fast and maintainable!


Sometimes performance is not the main factor

Posted Oct 21, 2009 17:03 UTC (Wed) by felixrabe (guest, #50514) [Link]

How about this: have two chunks of source code, one non-optimized, the other optimized. #if 0 out the non-optimized version.

Neat idea: put a SHA1 sum of the non-optimized code next to the (by hand) optimized one, and state (in a compiler-readable way) that the optimized version is equivalent to the code with that hash, and let the compiler check that the non-optimized, "commented-out" version still matches the hash - otherwise issue a warning and compile the non-optimized version instead.
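The scheme described above can be sketched as a small build-time check. This is only an illustration of the idea, not a real tool; the function names and the whitespace-normalization policy are invented for the example.

```python
# Sketch of the hash-check scheme: the hand-optimized chunk carries the
# SHA-1 of the non-optimized reference version, and a build-time check
# decides which one actually gets compiled.
import hashlib

def sha1_of(source: str) -> str:
    """Hash the reference chunk. Normalizing whitespace keeps the check
    from firing on purely cosmetic edits (a policy choice for this sketch,
    not part of the original proposal)."""
    canonical = "\n".join(line.strip() for line in source.strip().splitlines())
    return hashlib.sha1(canonical.encode("utf-8")).hexdigest()

def select_version(reference_src: str, recorded_digest: str,
                   optimized_src: str) -> tuple[str, bool]:
    """Return (source to compile, digests matched?). On a mismatch, fall
    back to the non-optimized version, as the comment proposes; a real
    tool would also emit a warning here."""
    if sha1_of(reference_src) == recorded_digest:
        return optimized_src, True
    return reference_src, False

reference = "int add(int a, int b) { return a + b; }"
digest = sha1_of(reference)          # stored next to the optimized chunk
optimized = "int add(int a, int b) { return a + b; } /* hand-tuned */"

# Reference unchanged: the optimized version is used.
src, ok = select_version(reference, digest, optimized)
assert ok and src == optimized

# Reference has drifted: warn and compile the non-optimized version.
src, ok = select_version("int add(int a, int b) { return a - b; }",
                         digest, optimized)
assert not ok and src != optimized
```

Whether the two chunks really stay semantically equivalent is, of course, exactly the part the hash cannot check; it only tells you the reference the human verified against has not silently changed.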

Sometimes performance is not the main factor

Posted Oct 21, 2009 17:20 UTC (Wed) by nevets (subscriber, #11875) [Link]

And this makes things cleaner and maintainable how?

Note, it is also design decisions that may not be the best for performance. Some of the issues are with the compiler. We break large functions up to make the code more readable, and this creates hard questions about whether or not to inline functions.

You may think inlining a bunch of functions will help performance, but then you may increase the size of the code and start taking more instruction cache misses, which cost more than a function call. Some archs handle function calls better than others.

Yes, if a design improves performance by 1 or 2 percent, it may be rational to go with the more complex design. But if the more complex design only saves you a quarter of a percent, and it is much more likely to carry bugs (more complex code is always more buggy), then it is not worth it. But as the kernel grows, each of those 1/4 percent performance regressions adds up.
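The "adds up" point is easy to make concrete with a little arithmetic: small per-change slowdowns compound multiplicatively, so a run of them costs a bit more than their naive sum. The figure of 40 regressions below is an arbitrary illustration, not a number from the article.

```python
# How small per-change regressions accumulate across many changes.
def compounded_slowdown(per_change: float, n: int) -> float:
    """Total slowdown after n changes, each costing per_change
    (e.g. 0.0025 for a quarter of a percent)."""
    return (1.0 + per_change) ** n - 1.0

# 40 regressions of 0.25% each: the naive sum says 10%;
# compounding makes it slightly worse.
total = compounded_slowdown(0.0025, 40)
print(f"{total:.2%}")   # prints 10.50%
```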

With things like ftrace and perf now in the kernel, we can start looking deeper at problem areas, and hopefully redesign things in a maintainable way to get some of our performance back.

Sometimes performance is not the main factor

Posted Oct 22, 2009 0:17 UTC (Thu) by nix (subscriber, #2304) [Link]

That teaches it one peephole optimization, as a solid lump that can't be
split up or scheduled (as that would probably lose whatever property you
were trying to tell it about). A likely loss.

(And, er, also, most optimizations can't be expressed usefully in source
code, and those that can are much too complicated to express by handing
the compiler two hunks and saying 'this one is optimized'. The only thing
this could usefully impart is trivial code-motion optimizations, and those
are *transformations on graphs*, not a straight replacement of one lump of
source code with another.)

Now in some languages you *can* do something like this: start with
something non-optimized and prove to the compiler that it can transform it
into something optimized, and it can do that henceforward to all similar
constructs it encounters. But the 'something' is not going to be a lump of
C. Even Haskell's not really expressive enough for this sort of thing to
work, and doing it is *not* simple.

Copyright © 2018, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds