When you start dealing with very large applications -- an entire browser, office suite, Photoshop-esque image editor, long-running server process, game engine, or many scientific tools -- the number of allocated objects is very high, the total amount of allocated memory is often large, and the memory manager has to cope with an awful lot of allocation churn. All of those things are really bad for most garbage collection algorithms.
They're really not all that great for classic C/C++-style manual memory management, either. A particularly good GC can actually be faster overall than even a well-coded application using manual memory management. The catch is that manual memory management spreads its cost out across every allocation and deallocation, so no single allocation ever incurs a big pause, and all the memory-manager work happens at places in the code the programmer can easily identify. With automatic memory management, a big chunk of the garbage collection process (possibly all of it) can happen at any allocation, with no deterministic way to identify which one. Even with an incremental collector, that can lead to long pauses landing right in the middle of a relatively speed-sensitive bit of code; it's not a big deal if the GC runs during general idle-loop processing in a GUI app, but it can be quite noticeable if it kicks in during animation processing, for example.
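To make that concrete, here's a minimal Java sketch (all the names -- Point, AnimationLoop, render -- are hypothetical, just for illustration): the per-frame allocation means any iteration's "new" can be the one that triggers a collection, and the programmer can't tell which frame will eat the pause.

    // Hypothetical sketch: allocation churn inside a speed-sensitive loop.
    final class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    public class AnimationLoop {
        public static void main(String[] args) throws InterruptedException {
            for (int frame = 0; frame < 1_000; frame++) {
                // A fresh short-lived Point every frame: any one of these
                // allocations may be the one that triggers a GC pause,
                // blowing the ~16 ms frame budget nondeterministically.
                Point p = new Point(Math.sin(frame * 0.1), Math.cos(frame * 0.1));
                render(p);
                Thread.sleep(16);  // roughly 60 fps
            }
        }

        static void render(Point p) {
            // Stand-in for real drawing work.
            if (p.x + p.y > 1.9) System.out.println("peak frame");
        }
    }

The usual workaround is to hoist allocation out of the hot loop (reuse a scratch Point, or keep an object pool), which is exactly the kind of GC-aware tuning large applications end up doing.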
So anyway, yeah, all garbage-collected languages have this problem; it's just that most of them don't have a single app on the scale of what Java or C# are frequently used for.