
Rethinking optimization for size

Posted Feb 4, 2013 23:29 UTC (Mon) by rgmoore (✭ supporter ✭, #75)
In reply to: Rethinking optimization for size by ssam
Parent article: Rethinking optimization for size

It seems like PGO is something that would only have to be run once in a while. The hot code paths are likely to stay hot unless you substantially rewrite your program. So you'd only need to run the profiling occasionally, then use that to optimize your choice of compiler flags, which could be written into the makefile. Everyone downstream could benefit from the improvements without having to do the profiling themselves.



Rethinking optimization for size

Posted Feb 5, 2013 19:10 UTC (Tue) by khim (subscriber, #9252)

PGO does not work that way. It does not change the set of flags; it is an entirely orthogonal optimization.
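Concretely, the profile is gathered and consumed by the compiler itself rather than by tuning flags. A minimal sketch of GCC's two-pass workflow (the file name and workload here are invented for illustration; the -fprofile-generate and -fprofile-use flags are GCC's real ones):

    /* pgo_demo.c -- hypothetical example program.
     *
     * Pass 1: instrument and gather a profile:
     *   gcc -O2 -fprofile-generate pgo_demo.c -o pgo_demo
     *   ./pgo_demo          (run a representative workload; writes *.gcda)
     *
     * Pass 2: recompile the *identical* source using that profile:
     *   gcc -O2 -fprofile-use pgo_demo.c -o pgo_demo
     */
    #include <stdio.h>

    int main(void)
    {
        long sum = 0;
        for (long i = 0; i < 100000000; i++)
            sum += i % 7;       /* the training run marks this loop hot */
        printf("%ld\n", sum);
        return 0;
    }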

Basically, a lot of optimizations (perhaps most) are tradeoffs: "if we unroll this loop and it's hot, then we win because it'll be faster, but if it's cold, then we lose because we increase memory pressure... and we could unroll it twice, or four times, or even a hundred times... what to do, what to do". Without PGO the compiler falls back on heuristics ("the if branch will probably be taken more often than the else branch", etc.), but with PGO it knows whether a given piece of code is hot or cold. And that makes all the same optimizations perform better.
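For what it's worth, that kind of branch heuristic can also be spelled out by hand. A small sketch (the function and data are invented for illustration) using GCC's real __builtin_expect builtin, which encodes exactly the sort of guess that a PGO profile replaces with measured counts:

    #include <stdio.h>
    #include <stddef.h>

    static void rare_case(int v)        /* cold: hardly ever called */
    {
        fprintf(stderr, "negative value: %d\n", v);
    }

    static long process(const int *data, size_t n)
    {
        long sum = 0;
        for (size_t i = 0; i < n; i++) {
            /* Static hint: "this branch is probably not taken".
             * A PGO build substitutes actual counts from the
             * training run for guesses like this one. */
            if (__builtin_expect(data[i] < 0, 0))
                rare_case(data[i]);
            else
                sum += data[i];         /* hot path */
        }
        return sum;
    }

    int main(void)
    {
        int data[] = { 1, 2, 3, -4, 5 };
        printf("%ld\n", process(data, sizeof data / sizeof data[0]));
        return 0;
    }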

That's why you cannot reuse the results of PGO runs: you need the exact same code compiled twice. Most changes will invalidate the results (tiny changes in the source can mean significant changes in the parsed tree, especially in C++). Yes, you may know that the hot codepath is still somewhere in this function, but where exactly? That's the question.

