Posted Jun 27, 2010 17:05 UTC (Sun) by nix
In reply to: Pauses
Parent article: The Managed Runtime Initiative
That's why I was talking about compilation stages, not optimisation stages.
There is no meaningful distinction, after parsing, between translation and optimization. There are only compilers that optimize more, and compilers that optimize less.
I'm also sure that GCC does not use hundreds of megabytes to keep very complex graphs in memory.
You're hilariously wrong. (Seriously, what do you think it's doing with the gigabytes of space it can sometimes use? Playing electronic tic-tac-toe? Of course it's holding graphs, and other things, some of which it indeed can never throw away: e.g. hordes of decl nodes for all the declarations in all the header files it's parsed...)
I'd rather have GCC spend its time on using 10 times the memory for compiling small files fast than waste time on reducing memory usage by running slower and not really achieving it anyway.
You'd say that until you tried to compile something and it failed because you ran out of memory, or it started taking a thousand times longer than otherwise because you were pushed into swap. Also, the more we reduce memory usage, the better our (still quite awful) cache utilization becomes.
You seem to assume that GCC would use a lot more memory if it didn't use GC. I think it's rather that GC doesn't improve the situation.
No. If it never garbage collected and never called free(), as you suggested, it would use vastly more memory (many times what it uses now).
As for what would happen if it were designed without GC, we have actual evidence of that: before 3.0 it had no GC at all and used manual memory management for everything, with explicit lifetimes. Maintaining the object lifetimes was a nightmare, and countless bugs were raised for SEGVs caused by something being freed when it shouldn't have been.
In my experience memory leaks almost never happen and when they do happen they're easily fixed.
That's true until you have to manage complex data structures. It's also only true if your memory leak (or, worse, double free) isn't tied up with an API bug somewhere, fixable only by changing the API. I've seen that too, and it was a nightmare to fix (we had to change every caller, and there were hundreds; ick, never again).
GC may be less efficient than manual memory management (theoretically this need not be true, but I've never heard of a situation where that theory has been turned into practice), and it's surely not right for every situation, but it wipes out all these problems at a stroke.
And to be honest, if asked whether I'd rather have a program that ran more slowly but used GC or a program that didn't work, I'd surely pick the former.