This way, memory usage can be pushed down to roughly that of a normal compilation for about 90% of the time. Some of the work is not parallelizable, though, and that part will still require a good deal of memory.
(The description of what LTO does is also somewhat unclear. It works by serializing the GIMPLE intermediate representation into special sections in the object files before carrying out most optimization. Then, at link time (via the compiler driver or, for .a files, via a special linker plugin supported by gold and by recent versions of GNU ld), it reads the whole lot back in and runs it through the optimizer almost as if it were a single source file. But that 'almost' covers a multitude of sins, and to be useful in practice the thing had to be parallelizable as well. It was a *lot* of work to make GCC capable of this, and I for one believed it could never be made to work reliably. I was wrong.)
Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds