> (actually, to be maximally pedantic it's almost never correct because
> most passes annotate or shuffle some tree somewhere: if it did nothing
> all the time it would be a pretty useless pass.)
That's why I was talking about compilation stages, not optimisation
stages. I don't think you can free any significant amount of memory
between the latter ones, because you never know what may be needed
in the future. Even if you replace multiple nodes with fewer ones,
you can't save much memory because of fragmentation. At best it's a
factor of 2, in which case you have to manage overall memory
consumption by reducing your working set anyway.
I'm also sure that GCC does not use hundreds of megabytes to keep
very complex graphs in memory.
I'd rather have GCC spend its effort on compiling small files fast,
even at 10 times the memory, than waste it on reducing memory usage,
running slower, and not really achieving the reduction anyway.
> But it seems you don't get it. I know GCC already has a GC: I'm using it
> as a counterexample to your claim that GC is unnecessary because you can
> always simplify your datastructures. You cannot always do that (although
> you often can and it should be the first thing you try, if practicable),
> and doing it by hand (i.e. explicitly triggering GC cycles) is often
> impractical too.
No, I don't get it. You brought up GCC as an example of something that
people complain uses too much memory, and of something with complex
data structures that can't easily be handled by manual memory
management. What I'm trying to say is that GC doesn't solve any real
problem; it's just a convenience for the lazy, and when you do use too
much memory (bloat or a leak) it won't help you at all. When memory
handling is important, you have to put it into your program design at a
higher level. You seem to assume that GCC would use a lot more memory
if it didn't use GC; I think it's rather that GC doesn't improve the
situation.
> Managing (or rather mismanaging) object lifetime is the part of memory
> management that is most likely to cause bugs in C programs (less so in
> C++); GC largely eliminates that as a cause of bugs.
In my experience memory leaks almost never happen, and when they do
they're easily fixed. At least in C: when it happens in Java it's a bit
harder, because then you have to hunt down the lingering references
that keep the dead objects reachable, and you can't use Valgrind.
Firefox is a different story, but even a perfect GC wouldn't have saved
it. (The problem with C++ programs is rather that C++ adds its own
additional complexity without making anything easier, with the result
that there are more bugs and memory leaks in general.)