
Worst case performance and "impossibly unlikely" conditions

Posted Jan 12, 2012 2:08 UTC (Thu) by rbrito (subscriber, #66188)
In reply to: Worst case performance and "impossibly unlikely" conditions by rgmoore
Parent article: Denial of service via hash collisions

> Perl switched sort implementations because they saw bad performance as a performance problem, not because they saw poor performance as a possible DOS problem.

Thanks for pointing it out.

The article does (at least to my non-native-English-speaker reading) give the impression that the "bad performance is a DOS security problem" concern was the reason for the change.



Worst case performance and "impossibly unlikely" conditions

Posted Jan 12, 2012 3:42 UTC (Thu) by rgmoore (✭ supporter ✭, #75)

I think I may have inadvertently confused you. The sort implementation I'm talking about is a different example from the hash implementation mentioned in the parent article, where the performance regression is only likely to matter in the context of a deliberate DOS attack. Perl sorting is just a real-world example of an algorithm encountering pathological data far more often than you might naively expect. Practical experience with pathological data forced the Perl developers to start considering worst-case performance rather than just the average case.
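
To make the pathological-data point concrete, here is a small sketch, not Perl's actual sort code: the naive_quicksort helper below is hypothetical, a first-element-pivot quicksort written only to count comparisons, and it goes quadratic on already-sorted input while the built-in sort on the same data does not.

    #!/usr/bin/perl
    # Sketch only: a deliberately naive quicksort (first element as pivot),
    # not Perl's real implementation, used to compare comparison counts on
    # already-sorted input against the built-in sort.
    use strict;
    use warnings;
    no warnings 'recursion';   # the naive sort recurses once per element here

    my $comparisons;

    sub naive_quicksort {
        my @a = @_;
        return @a if @a <= 1;
        my $pivot = shift @a;              # first element as pivot
        my (@lo, @hi);
        for my $x (@a) {
            $comparisons++;
            if ($x < $pivot) { push @lo, $x } else { push @hi, $x }
        }
        return (naive_quicksort(@lo), $pivot, naive_quicksort(@hi));
    }

    for my $n (100, 200, 400) {
        $comparisons = 0;
        naive_quicksort(1 .. $n);          # already sorted: the worst case
        my $naive = $comparisons;

        $comparisons = 0;
        my @sorted = sort { $comparisons++; $a <=> $b } 1 .. $n;
        printf "n=%3d  naive quicksort: %6d comparisons   builtin sort: %5d comparisons\n",
               $n, $naive, $comparisons;
    }

Doubling n roughly quadruples the naive quicksort's comparison count, which is exactly the behaviour that bites when "impossibly unlikely" inputs turn out to be common.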

FWIW, the Perl documentation also points out that merge sort uses fewer comparisons than quicksort in the typical case (and no more in the worst case), so it may be faster if the comparisons themselves are expensive. One more example of why it's important to have several algorithms with different and well-understood performance characteristics for important tasks.
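
For the comparison-cost angle, a minimal sketch (the word list and the length-based comparator are made up for illustration): Perl's sort pragma, introduced in 5.8, lets code ask for a stable sort or, historically, for a particular algorithm; as far as I know the algorithm-selection subpragmas are no-ops in recent Perls, where mergesort is always used. Counting comparator calls shows why an expensive comparator makes the number of comparisons the dominant cost.

    #!/usr/bin/perl
    # Sketch: request a stable sort via the sort pragma (Perl 5.8+) and
    # count how often the comparator is actually called; with an expensive
    # comparator, that call count dominates the run time.
    use strict;
    use warnings;
    use sort 'stable';

    my $calls = 0;
    my @words = qw(pear apple plum apricot peach cherry quince fig);
    my @by_length = sort {
        $calls++;                          # stand-in for an expensive comparison
        length($a) <=> length($b)          # primary key: string length
            || $a cmp $b                   # tie-break lexicographically
    } @words;

    print "@by_length\n";
    print "comparator called $calls times for ", scalar @words, " elements\n";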

