Pauses

Posted Jul 14, 2010 14:23 UTC (Wed) by i3839 (guest, #31386)
In reply to: Pauses by nix
Parent article: The Managed Runtime Initiative

-flto enables link-time optimisation, which is IMHO something slightly
different, and the earlier (current?) implementation has a reputation for
being slow. I was thinking more of -combine -fwhole-program. As far as I
know, link-time optimisation is about doing further optimisations just
before the real linking.

The -fmem-report flag is indeed interesting. On my current project, which
is about 4K lines of C code, it reports 16MB allocated with -combine
-fwhole-program, 12MB when using dietlibc, and at most 3MB when compiling
the files one at a time.
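
For reference, here is a minimal sketch of that kind of measurement. The
file names are hypothetical; -fmem-report, -fwhole-program and the (old)
-combine switch are the real GCC options under discussion, and the
commands are shown as comments:

    /* a.c (hypothetical) */
    int helper(int x) { return x * 2; }

    /* b.c (hypothetical) */
    extern int helper(int x);
    int main(void) { return helper(21); }

    /*
     * One file at a time -- GCC reports its memory usage per translation unit:
     *   gcc -O2 -fmem-report -c a.c
     *   gcc -O2 -fmem-report -c b.c
     *   gcc -O2 -o prog a.o b.o
     *
     * Everything at once (older GCC releases that still had -combine) --
     * the report covers the whole program held in memory together:
     *   gcc -O2 -combine -fwhole-program -fmem-report -o prog a.c b.c
     */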

So assuming C++ is ten times worse and the code is ten times bigger, you're
indeed easily into gigabytes of memory (16MB x 10 x 10 is roughly 1.6GB).
I guess you don't want to compile big C++ programs all at once though;
doing it per file should be fine.

> That's exactly the class of allocations for which obstacks are good and
> GC can often be forgone. When you have long-lived allocations in a
> complex webwork in elaborate interconnected graphs, then is when GC
> becomes essential. And GCC has elaborate interconnected graphs up the
> wazoo. The quality of internal APIs is mostly irrelevant here: it's the
> nature of the data structures that matters.

With crappy APIs/design you can't allocate objects on the stack, but are
forced to allocate them dynamically even when they're short-lived.
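
For what it's worth, here is a minimal sketch of the obstack pattern the
quoted comment alludes to, using glibc's <obstack.h>; the loop and names
are made up for illustration:

    #include <obstack.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* obstack needs to be told how to obtain and release its chunks */
    #define obstack_chunk_alloc malloc
    #define obstack_chunk_free  free

    int main(void)
    {
        struct obstack scratch;
        obstack_init(&scratch);

        /* lots of short-lived allocations, e.g. per-statement temporaries */
        for (int i = 0; i < 1000; i++) {
            char *tmp = obstack_alloc(&scratch, 64);
            snprintf(tmp, 64, "temporary %d", i);
        }

        /* one call releases the whole batch: no GC, no per-object free() */
        obstack_free(&scratch, NULL);
        return 0;
    }

The point is that allocations with a clear, common lifetime can be thrown
away in bulk, which is exactly what a decent API makes possible and a
crappy one prevents.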

The problem with elaborate interconnected graphs is that it's hard to end
up with nodes that have no references at all, so GC usually won't help
much. And even if it does, it probably doesn't reduce peak memory usage.
So yes, in such cases you want something like GC for your own sanity, but
it won't improve total memory usage much if you don't limit the graph size
some other way.
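
A toy illustration of that point (nothing GCC-specific here, just a
hypothetical mark pass over a densely connected graph):

    #include <stdio.h>
    #include <stdlib.h>

    #define N 1000

    struct node {
        struct node *next;   /* every node references another node */
        int marked;
    };

    int main(void)
    {
        struct node *nodes = calloc(N, sizeof *nodes);
        if (!nodes)
            return 1;
        for (int i = 0; i < N; i++)
            nodes[i].next = &nodes[(i + 1) % N];   /* one big cycle */

        /* mark phase: walk everything reachable from the root */
        for (struct node *p = &nodes[0]; !p->marked; p = p->next)
            p->marked = 1;

        int unreachable = 0;
        for (int i = 0; i < N; i++)
            if (!nodes[i].marked)
                unreachable++;

        /* prints 0: every node is reachable, so a collector reclaims nothing */
        printf("unreachable nodes: %d of %d\n", unreachable, N);
        free(nodes);
        return 0;
    }

As long as the root keeps the graph reachable, a collector never gets to
reclaim anything, and peak memory stays at whatever the graph grows to.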



Pauses

Posted Jul 15, 2010 17:06 UTC (Thu) by nix (subscriber, #2304)

Link-time optimization works by having each compilation phase take its translation unit only as far as the GIMPLE tree stage and write it out, then having the link stage invoke the compiler again (via collect2, or via a linker plugin: the latter is preferable because then you can run it over the contents of .a archives as well) to read in all the GIMPLE trees and optimize them. At that stage you've got all the trees written out by all the compilation phases in memory at once, and a good few optimizations will, unless bounded explicitly, start using up crazy amounts of memory when hit with a large program all at once. (Of course they *are* bounded explicitly. At least all of them that anyone noticed are.)
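
As a concrete sketch of that flow (file names hypothetical; -flto and -fuse-linker-plugin are the real options):

    /* hello.c (hypothetical) */
    int main(void) { return 0; }

    /*
     * Step 1: each compilation writes its GIMPLE representation into the
     * object file (alongside or instead of machine code, depending on the
     * GCC version):
     *   gcc -O2 -flto -c hello.c other.c
     *
     * Step 2: the "link" step re-invokes the compiler over all the GIMPLE
     * at once; with the linker plugin it can also see inside .a archives:
     *   gcc -O2 -flto -fuse-linker-plugin -o prog hello.o other.o libfoo.a
     */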

-combine -fwhole-program has a similar memory hit, for similar reasons.

C++ is a good bit worse memory-wise than C, mostly because its headers make C headers look like child's toys: they contain a lot of definitions and a large number of very complex declarations. I've known compiling *single C++ files* to eat a couple of gigabytes before (although, admittedly, those were exceptional cases, things like large yacc parsers compiled as C++: MySQL's SQL parser is a good example).

