When you get an out-of-memory error in an overcommitted state there is no way to free up resources, nor to ask the user how to proceed (and anyway, that would generally itself require allocations). What actually happens on out-of-memory is that the OOM killer wakes up and kills a semi-random process, and you have no say in this at all.
For the case where you have resources that can safely be freed in an out-of-memory situation, the right thing to do is not to fail the allocation at all, but rather to have some kind of memory-pressure signal when memory is tight (but not yet exhausted). Apps could then handle this by dropping caches and other reclaimable resources, and you'd never run into the OOM killer problem in the first place (see the sketch below).
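GLib's GMemoryMonitor API (added in 2.64) is one way to get exactly that kind of signal; here's a minimal sketch of how an app might react to it (drop_caches() is a hypothetical helper standing in for whatever your app can rebuild later):

#include <gio/gio.h>

/* Hypothetical helper: free caches and other resources the app
 * can recreate later if it needs them again. */
static void
drop_caches (void)
{
}

static void
on_low_memory (GMemoryMonitor            *monitor,
               GMemoryMonitorWarningLevel level,
               gpointer                   user_data)
{
  /* Memory is tight but not yet exhausted; reclaim what we can
   * before the OOM killer has a reason to wake up. */
  if (level >= G_MEMORY_MONITOR_WARNING_LEVEL_MEDIUM)
    drop_caches ();
}

int
main (void)
{
  GMemoryMonitor *monitor = g_memory_monitor_dup_default ();
  GMainLoop *loop = g_main_loop_new (NULL, FALSE);

  g_signal_connect (monitor, "low-memory-warning",
                    G_CALLBACK (on_low_memory), NULL);
  g_main_loop_run (loop);

  g_object_unref (monitor);
  g_main_loop_unref (loop);
  return 0;
}

The warning level lets you scale the response: drop the cheap caches at the low level, everything non-essential at critical.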
There is one kind of allocation failure that is not OOM-killer related though, and that's where a single allocation is larger than physical memory or the mappable address space. This can happen for instance if you're reading in some random user file (say an image) and it happens to decode to an 8 gigabyte array (maybe because it's an exploit, or just genuinely huge). In these kinds of situations I think it makes sense to check for allocation failure, and glib does in fact have a call for that (g_try_malloc).
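As a sketch of what that looks like in practice (load_pixels() and its error handling are made up for illustration, not from any real decoder):

#include <glib.h>

/* The dimensions come straight from an untrusted file header, so the
 * computed size can be absurd (or a deliberate exploit attempt). */
static guchar *
load_pixels (gsize width, gsize height, GError **error)
{
  guchar *pixels;

  /* Reject empty images and guard against overflow in the
   * size computation itself (4 bytes per pixel). */
  if (width == 0 || height == 0 ||
      width > G_MAXSIZE / 4 / height)
    {
      g_set_error (error, G_FILE_ERROR, G_FILE_ERROR_NOMEM,
                   "Bad image dimensions");
      return NULL;
    }

  /* g_try_malloc returns NULL on failure instead of aborting, so a
   * hostile 8 gigabyte image becomes a reportable error, not a crash. */
  pixels = g_try_malloc (width * height * 4);
  if (pixels == NULL)
    {
      g_set_error (error, G_FILE_ERROR, G_FILE_ERROR_NOMEM,
                   "Not enough memory to decode image");
      return NULL;
    }

  return pixels;
}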
However, in most cases (like allocating internal, known-size objects) I'm firmly in the abort-on-OOM school, since adding all the complexity (both to your code and to users of your library) means more bugs, and doesn't help anyway (the OOM never gets reported to you, the kernel just kills some process instead). Of course, as david said in the article, there are exceptional situations, like core system software (init, dbus, etc.) that we can't just have die, where the complexity is worth it.