Unless it's changed a lot since I last read the source, /dev/random and /dev/urandom are essentially the same: both use a PRNG to output values based on an internal state that is updated as the devices are read or written.
The only difference is that the kernel keeps a wild-assed guess of how many bits of entropy are in the pool. This is information that is impossible to know; it can merely be estimated, though the kernel also does a bit of checking for obvious statistical non-randomness. Since it's possible to have zero-entropy data that is indistinguishable from random, this check is a pure heuristic.
What /dev/random does differently is that when you read from it, it checks its current entropy estimate and won't give you anything if the number is too small.
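On Linux, you can peek at the kernel's running entropy estimate yourself via procfs. A minimal sketch, assuming a Linux system where `/proc/sys/kernel/random/entropy_avail` exists (it's skipped elsewhere):

```python
import os

# Linux exposes the kernel's current entropy guess (in bits) here.
# This is the number /dev/random consults before deciding to block.
ENTROPY_PATH = "/proc/sys/kernel/random/entropy_avail"

if os.path.exists(ENTROPY_PATH):  # Linux-only; absent on other OSes
    with open(ENTROPY_PATH) as f:
        entropy_bits = int(f.read().strip())
    print(f"kernel entropy estimate: {entropy_bits} bits")
else:
    print("no procfs entropy counter on this system")
```

The point being: that number is just the heuristic described above, not a measurement of anything real.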
I agree with the advice to always use /dev/urandom. If you really need to be sure you are getting random data, you should be using a hardware RNG, not the output of a PRNG, and you shouldn't assume /dev/random is better based on heuristic guesses that aren't backed up by anything.
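In practice, "just use /dev/urandom" usually means using your language's wrapper for it. A minimal sketch in Python, where `os.urandom` reads from the kernel CSPRNG (the same source as /dev/urandom on Linux) without ever blocking on an entropy estimate:

```python
import os

# Draw 32 bytes of cryptographically strong randomness from the
# kernel CSPRNG -- suitable for keys, nonces, tokens, etc.
key = os.urandom(32)

print(len(key))       # 32 bytes, always returned without blocking
print(key.hex())      # hex-encoded for display
```

Two consecutive calls will (with overwhelming probability) never return the same bytes, which is all the guarantee a CSPRNG consumer actually needs.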
The reason it's the default for some things is ass-covering, as far as I can tell: if you always appear to be doing the most conservatively secure thing, you can't be criticized later.