In programming we rarely worry about the worst-case performance of an algorithm. We are happy to take the code with good average-case performance over code that is usually ten times slower but has fewer pathological cases. For example, almost every scripting language implements dictionaries using a hash function rather than some kind of binary tree. The slower ordered-tree-based dictionary implementations are confined to "heavier" languages such as C++; the longer the programmer's beard, the better the chance that worst-case performance has been considered carefully.
Except for safety-critical systems, this has usually been seen as okay. But when you have an attacker deliberately setting out to break your code, the worst case becomes more important. You can no longer appeal to luck to rule out "incredibly unlikely" collisions or coincidences, unless you have ensured that any attacker won't be able to twist fortune in his favour.
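The canonical instance is hash flooding: if an attacker can predict a dictionary's hash function, they can submit keys that all land in the same bucket, degrading the expected O(1) operations into linear scans over one long chain. The sketch below is a toy chaining table in C with a deliberately predictable hash; the names and numbers are illustrative, not any real library's API. It inserts the same number of benign and attacker-chosen keys and lets you watch the cost diverge.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define NBUCKETS 1024

    struct entry { long key; struct entry *next; };
    static struct entry *buckets[NBUCKETS];

    /* A deliberately predictable hash: anyone can construct colliding keys. */
    static size_t hash(long key) { return (size_t)key % NBUCKETS; }

    static void insert(long key)
    {
        size_t b = hash(key);
        /* A dictionary checks for an existing key first, so with adversarial
           keys every insert walks one ever-growing chain: O(n) per insert. */
        for (struct entry *e = buckets[b]; e; e = e->next)
            if (e->key == key)
                return;
        struct entry *e = malloc(sizeof *e);
        e->key = key;
        e->next = buckets[b];
        buckets[b] = e;
    }

    static double time_inserts(long step, long n)
    {
        clock_t t0 = clock();
        for (long i = 0; i < n; i++)
            insert(i * step);
        return (double)(clock() - t0) / CLOCKS_PER_SEC;
    }

    int main(void)
    {
        /* Benign keys spread across the buckets; attacker-chosen keys are all
           multiples of NBUCKETS and therefore all hash to bucket 0. */
        printf("benign:      %.3fs\n", time_inserts(1, 50000));
        printf("adversarial: %.3fs\n", time_inserts(NBUCKETS, 50000));
        return 0;
    }

This is why many scripting-language runtimes now seed their string hash with a per-process random key (CPython, for instance, uses a randomized SipHash): it is exactly the step of making sure the attacker cannot twist fortune in his favour.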
Another example is conservative garbage collection for languages like C, where the runtime has no special support for GC. Most of the time it is very unlikely that you will confuse the garbage collector by having binary data which happens to look like pointers to objects on the heap (especially on a 64-bit system). But if you take data from the big wide world, it's quite plausible that an attacker could DoS your application by feeding it binary data that confuses the GC into treating garbage as live, so memory is never reclaimed. Again, the approach works well enough in practice, until it doesn't.
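To make that failure mode concrete, here is a minimal sketch of the conservative-scan idea, not modelled on any particular collector: every word in scanned memory that happens to fall inside the heap's address range is treated as a possible pointer, and whatever it "points" to must be kept alive. The heap_lo/heap_hi bounds, looks_like_pointer and conservative_scan are hypothetical simplifications; a real collector such as the Boehm GC also finds the enclosing object and marks it transitively.

    #include <stdint.h>
    #include <stdio.h>
    #include <stddef.h>

    static uintptr_t heap_lo, heap_hi;   /* bounds of the managed heap */

    static int looks_like_pointer(uintptr_t word)
    {
        return word >= heap_lo && word < heap_hi;
    }

    static void conservative_scan(const void *buf, size_t len)
    {
        const uintptr_t *words = buf;
        for (size_t i = 0; i < len / sizeof(uintptr_t); i++) {
            if (looks_like_pointer(words[i])) {
                /* A real collector would mark the object containing this
                   address. If buf is attacker-controlled binary data, any
                   value landing in [heap_lo, heap_hi) pins an object and
                   defeats collection. */
                printf("word %zu treated as a pointer: %#lx\n",
                       i, (unsigned long)words[i]);
            }
        }
    }

    int main(void)
    {
        static char heap[1 << 20];        /* stand-in managed heap */
        heap_lo = (uintptr_t)heap;
        heap_hi = heap_lo + sizeof heap;

        /* "Untrusted" input that just happens to contain heap-range values. */
        uintptr_t input[4] = { 0, heap_lo + 128, 42, heap_lo + 4096 };
        conservative_scan(input, sizeof input);
        return 0;
    }

On a 64-bit system a random word rarely falls inside the heap's range, which is why the approach usually works; an attacker who can guess roughly where the heap lives does not have to rely on "rarely", and every false pointer is memory your program can never free.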