
Worst case performance and "impossibly unlikely" conditions

Posted Jan 13, 2012 20:49 UTC (Fri) by joey (subscriber, #328)
In reply to: Worst case performance and "impossibly unlikely" conditions by rgmoore
Parent article: Denial of service via hash collisions

Perl's sort does sort the same list the same way twice, despite being unstable. If the pivot is chosen randomly, it must be pseudorandom, with the seed derived from the input list.

To test this I used code like this, which exercises the instability of the sort:

perl -MData::Dumper -e 'print Dumper sort { $a->[0] <=> $b->[0] } ([1,1],[2,3],[1,2],[2,1],[2,4])'

perl -le 'sub c { $a->[0] <=> $b->[0] } @l=([1,1],[2,3],[1,2],[2,1],[2,4]); print "@{[sort c @l]}" eq "@{[sort c @l]}"'
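
For the pivot-seeding idea above, here is a minimal sketch (illustrative only; pick_pivot_index and the hashing scheme are invented for the example, not perl's actual pp_sort.c): the "random" pivot is a pure function of the input, so sorting the same list again picks the same pivots.

# Illustrative sketch only: a pivot chooser whose randomness is
# derived from the data itself, so the same (numeric) list always
# yields the same pivot index from run to run.
sub pick_pivot_index {
    my @list = @_;
    my $seed = 0;
    $seed = ($seed * 33 + $_) % 2**31 for @list;   # cheap hash of the input
    srand($seed);                                  # seed the PRNG from the data
    return int rand scalar @list;                  # same list => same "random" pivot
}

print pick_pivot_index(5, 3, 8, 1), "\n";          # prints the same index every run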



Worst case performance and "impossibly unlikely" conditions

Posted Jan 13, 2012 23:20 UTC (Fri) by rgmoore (✭ supporter ✭, #75)

That's because they changed the default sort to merge sort, which is stable, starting in 5.8. If you want to test whether their quicksort is stable from run to run over the same data, you'd need a use sort '_quicksort'; no sort 'stable'; pragma (or the like) to force it to use quicksort.
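
A minimal sketch of that test, assuming a perl where the sort pragma still honours the _quicksort subpragma (it was later deprecated and removed); by_first and the data are made up for the example:

# Force quicksort, then sort the same data twice and compare orders.
use sort '_quicksort';
no sort 'stable';

sub by_first { $a->[0] <=> $b->[0] }   # ties between pairs sharing a first element

my @l = ([1,1],[2,3],[1,2],[2,1],[2,4]);
my @one = sort by_first @l;
my @two = sort by_first @l;

# The second field distinguishes tied elements, so equal strings here
# mean the two runs ordered the ties identically.
print join(",", map { $_->[1] } @one) eq
      join(",", map { $_->[1] } @two) ? "same order\n" : "different order\n";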

