Worst case performance and "impossibly unlikely" conditions
Posted Jan 12, 2012 2:08 UTC (Thu) by rbrito (guest, #66188)
In reply to: Worst case performance and "impossibly unlikely" conditions by rgmoore
Parent article: Denial of service via hash collisions
Thanks for pointing it out.
The article does (at least to my reading as a non-native English speaker) give the impression that the "bad performance is a DoS security problem" concern was the reason for the change.
Posted Jan 12, 2012 3:42 UTC (Thu) by rgmoore (✭ supporter ✭, #75)
I think I may have inadvertently confused you. The sort implementation I'm talking about is a different example from the hash implementation mentioned in the parent article, where the performance regression is only likely to matter in the context of a deliberate DoS attack. Perl sorting is just a real-world example of an algorithm encountering pathological data far more often than you might naively expect. Practical experience with pathological data forced them to start considering worst-case performance rather than just the average case.
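To make the pathological-data point concrete, here is a minimal Perl sketch (not from the thread; the naive_quicksort routine and the input sizes are invented for illustration) that counts comparisons on already-sorted input, where a first-element-pivot quicksort degrades to quadratic behaviour while Perl's builtin mergesort-based sort stays near n·log n:

    #!/usr/bin/perl
    use strict;
    use warnings;
    no warnings 'recursion';   # the naive quicksort recurses ~n deep on sorted input

    my $cmp_count;             # comparisons performed by the routine under test

    # Naive quicksort that always picks the first element as pivot;
    # already-sorted input is a worst case for this pivot choice.
    sub naive_quicksort {
        my @a = @_;
        return @a if @a <= 1;
        my $pivot = shift @a;
        my (@lt, @ge);
        for my $x (@a) {
            $cmp_count++;
            if ($x < $pivot) { push @lt, $x } else { push @ge, $x }
        }
        return (naive_quicksort(@lt), $pivot, naive_quicksort(@ge));
    }

    my @sorted = 1 .. 1000;    # pathological for the naive pivot choice

    $cmp_count = 0;
    my @q = naive_quicksort(@sorted);
    print "naive quicksort: $cmp_count comparisons\n";  # n*(n-1)/2 = 499500

    $cmp_count = 0;
    my @m = sort { $cmp_count++; $a <=> $b } @sorted;   # Perl's mergesort (5.8+)
    print "builtin sort:    $cmp_count comparisons\n";  # far fewer; bounded by ~n*log2(n)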
FWIW, the Perl documentation also points out that merge sort uses fewer comparisons than quicksort in the typical case (and no more in the worst case), so it may be faster if the comparisons themselves are expensive. It's one more example of why it's important to have several algorithms with different, well-understood performance characteristics available for important tasks.
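As a rough illustration of the expensive-comparison case (again just a sketch, not from the thread; slow_cmp and its simulated 0.1 ms delay are made up), the total run time of such a sort is dominated by the number of comparator calls, which is why an algorithm that makes fewer comparisons can win outright:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Time::HiRes qw(time);

    # Artificially expensive comparator: the sleep stands in for real
    # per-comparison costs such as locale-aware string collation.
    my $calls = 0;
    sub slow_cmp {
        my ($x, $y) = @_;
        $calls++;
        select(undef, undef, undef, 0.0001);   # ~0.1 ms of "work" per comparison
        return $x <=> $y;
    }

    my @data = map { int rand 1_000_000 } 1 .. 500;

    my $t0      = time;
    my @sorted  = sort { slow_cmp($a, $b) } @data;
    my $elapsed = time - $t0;

    printf "%d comparisons in %.2f s; comparator time dominates the total\n",
           $calls, $elapsed;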