Reference counting has a number of advantages. Namely:
1) Lowest space overhead of any garbage collection scheme.
Memory is released immediately after it is no longer used, not "whenever I get around to it."
I would rather not see the OOM killer take out my processes just because someone in the kernel was slow to release what they allocated.
2) The best cache locality of any garbage collection scheme.
In the kernel, we always have to explicitly say when we're done with an object. That will always involve making some change to the object in memory. While it's loaded into the CPU's cache, why not free it as well?
If you sweep the trash under the rug, you just have to spend more effort pulling it out later. Why not put it in the trash can the first time around?
Have you looked at a chart of gates per CPU versus DRAM clock lately? They are growing at different rates; those curves have different big-Os with respect to time. Algorithms like copying garbage collection, which involve touching lots and lots of memory, look increasingly old-fashioned. They are guaranteed to ruin the cache. In userspace you also have the problem of the GC accessing memory paged out to disk, which leads to terrible performance.
3) It allows the programmer to specify a "destructor"-style callback which runs when the resource is no longer needed.
This is a nice bonus.
This was one of the best features of C++. The fact that Java lacks real destructors is an endless source of pain for people working in that language, and motivated the inclusion of a similar feature in C#.