There has been discussion about GC being faster than general-purpose memory allocators, and I tend to like that idea and agree with it. Also, I know that most people complaining about Java are not actually able to do better than a GC and write leak-free C code, as they assume they can (and by "most people" I already exclude anybody reading LWN - I'm thinking of most lamers or CS students).
But it doesn't apply in kernel space, apart from the existing use of manual refcounting, for several reasons:
- GCs depend on wasting memory space. A copying GC needs, by definition, twice the space actually used by the application (though one would typically use copying only for the young generation of a generational GC). Refcounting uses exactly the space needed, no more, no less.
- a recent paper argued that a GC needs 5x the live memory to be faster than manual allocation, so that the cost of a scavenge (which is always big, even with generations) is paid infrequently enough. That's not an option in the kernel.
- incremental/real-time GCs (which do not stop execution) are much slower even when fine-tuned, and extremely hard to get right.
- kernel devs are extremely concerned about cache impact, partly because they'd like to leave as much of the cache as possible for application use. And any GC other than refcounting is notoriously bad for locality.
- when you work in a dynamic language and already have a VM, you might as well implement a GC, but here there is no VM whatsoever.
- All of that applies to general-purpose memory allocators. The kernel instead makes extensive use of the SLAB/SLUB interface, which allocates fixed-size objects, rather than plain kmalloc().
Even kmalloc() is implemented on top of that: allocation sizes are rounded up, mostly to the next power of 2, with a few exceptions. A 700-byte object will actually use 1024 bytes. See include/linux/kmalloc_sizes.h for the actual size classes.
- All the discussion above is about automatic refcounting. That means incrementing refcounts on each function call, because you are pushing parameters onto the operand stack. No kidding - CPython works like that. Multithreaded CPython was 2x slower because of atomic refcounts and locks around name lookups, and that was CPython 1.5.2 IIRC. Nowadays the rest of the interpreter is more optimized (even if it still hugely sucks), so the same number of refcount operations would cause an even bigger relative slowdown.
Now, the kernel obviously does not use refcounting like that. If kernel developers can live without recursive locks, they can also avoid incrementing a refcount in a thread that already owns a reference.