
whole-program optimisation the hard way

whole-program optimisation the hard way

Posted Mar 10, 2006 7:29 UTC (Fri) by massimiliano (subscriber, #3048)
In reply to: whole-program optimisation the hard way by nix
Parent article: Some patches of interest

...possibilities include an intermediate representation (in the form of a bytecoded language for a nonexistent virtual machine) which GCC can save, load several of, and optimize. Politics is involved here, though, and whatever's done it'll be a lot of work.

Yes, but the approach would be the right one IMHO.
For instance, in Mono the JIT lays out the compiled methods sequentially in memory, and since methods are compiled on demand, this naturally creates a "cache friendly" memory layout for the machine code, where methods close in the call tree are close in memory.
We have an AOT compiler, but it misses this (and other) optimization opportunities, and we can see it.
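
A rough sketch of the idea in C (hypothetical code, not Mono's actual allocator; the names and sizes are invented): if each freshly JIT-compiled method is handed executable memory from a single bump-allocated region, then methods compiled close together in time (which, with on-demand compilation, tends to mean callers and their callees) end up adjacent in memory.

    #include <stddef.h>
    #include <sys/mman.h>

    #define CODE_REGION_SIZE (16 * 1024 * 1024)

    static unsigned char *code_region;   /* one executable arena for all JIT output */
    static size_t code_used;             /* bump pointer into the arena             */

    /* Reserve space for the next compiled method; returns NULL when out of room.
     * Because allocation only ever moves forward, consecutive compilations land
     * in consecutive memory, giving the cache-friendly layout described above. */
    static void *alloc_method_code(size_t size)
    {
        if (code_region == NULL) {
            code_region = mmap(NULL, CODE_REGION_SIZE,
                               PROT_READ | PROT_WRITE | PROT_EXEC,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (code_region == MAP_FAILED)
                return NULL;
        }
        if (code_used + size > CODE_REGION_SIZE)
            return NULL;
        void *method = code_region + code_used;
        code_used += size;
        return method;
    }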

And having a CPU-independent intermediate representation can solve a lot of other problems as well (and is the whole point of the existence of the ECMA standards implemented by Mono and MS .NET).

Now, of course this involves politics :-(



whole-program optimisation the hard way

Posted May 18, 2006 8:16 UTC (Thu) by job (guest, #670)

I think the politics nix refers to is that if gcc writes its intermediate representation to disk, you suddenly have a perfect opportunity to extend gcc with non-free software. One of the reasons gcc has been the most successful free compiler is that new architectures and optimizers have had to go in as free code; if you had the ability to swap out the back end, that would not have happened.

whole-program optimisation the hard way

Posted Jul 16, 2006 7:55 UTC (Sun) by jzbiciak (subscriber, #5246)

One easy way to address that is to generate the specific encoding when you build the compiler. That is, if the output of GCC really is the binary representation of GCC's internal representation, it will be *very* dependent on the specific version of GCC you're using.

Thus, it won't be a stable interface. Not even close.
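
As a minimal sketch of that (the names and on-disk format here are invented, not anything GCC actually does): stamp the serialized IR with the exact compiler build that produced it, and refuse to load anything else, so the format never turns into a stable interface that a proprietary back end could target.

    #include <stdio.h>
    #include <string.h>

    /* Baked in when the compiler itself is built. */
    #define THIS_COMPILER_BUILD "gcc-4.1.0-experimental-20060310"

    struct ir_header {
        char magic[4];       /* e.g. "GIR\0"                      */
        char producer[64];   /* version string of the writing gcc */
    };

    /* Accept an IR file only if it was written by this exact compiler build. */
    static int ir_version_ok(FILE *f)
    {
        struct ir_header h;

        if (fread(&h, sizeof h, 1, f) != 1)
            return 0;
        return memcmp(h.magic, "GIR", 4) == 0 &&
               strncmp(h.producer, THIS_COMPILER_BUILD, sizeof h.producer) == 0;
    }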

I think it's perfectly acceptable to require a complete rebuild if the compiler version changes on you and you're trying to do whole-program optimization. Sure, it makes the feature harder to use, but it doesn't constrain it unnecessarily.

The fact of the matter is that unless the IR is merely the GIMPLE output of the front end, different versions of the compiler are going to have different things to say about the code within the IR they output. And if the IR is merely the GIMPLE output from the front end, well, you've only saved parsing time across all your source files. Every -fwhole-program build still runs everything else on the entire program. You only start saving build time noticeably if you output stuff from later stages of analysis.
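
To illustrate that last point (invented names, purely a sketch of the reasoning, not GCC's actual pass pipeline): if the saved IR records how far through the pipeline it got, the whole-program step only has to redo the stages that come after that point; IR saved straight out of the front end still leaves all the expensive work to be done.

    /* How much of the pipeline had already run when the IR was written out. */
    enum ir_stage { IR_PARSED_ONLY, IR_ANALYZED, IR_OPTIMIZED };

    static void run_interprocedural_analysis(void) { /* placeholder */ }
    static void run_optimization_passes(void)      { /* placeholder */ }
    static void emit_machine_code(void)            { /* placeholder */ }

    /* Whole-program step: parsing is always already done once IR exists on
     * disk, but everything after the recorded stage must still be re-run. */
    static void finish_compilation(enum ir_stage done)
    {
        if (done < IR_ANALYZED)
            run_interprocedural_analysis();
        if (done < IR_OPTIMIZED)
            run_optimization_passes();
        emit_machine_code();
    }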

