LWN: Comments on "Denial of service via hash collisions" https://lwn.net/Articles/474912/ This is a special feed containing comments posted to the individual LWN article titled "Denial of service via hash collisions". en-us Sat, 18 Oct 2025 11:03:29 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net only a vulnerability in djbx33a? https://lwn.net/Articles/594785/ https://lwn.net/Articles/594785/ rurban <div class="FormattedComment"> Every single hash table that uses unsorted linear linked lists is attackable by this, because the attack goes against the unsorted list, and the hash function cannot help much. I wonder why everybody wants to improve X when Y is the problem, not X.<br> The random seeding also does not help much, as the seed can also be attacked (e.g. "REMOTE ALGORITHMIC COMPLEXITY ATTACKS AGAINST RANDOMIZED HASH TABLES", N Bar-Yosef, A Wool - 2009 - Springer). And if you've got the seed you get the collisions pretty easily with the right tools. You don't even need to use the \0 trick to which perl (&lt;5.18) was and most other languages still are susceptible.<br> <p> See <a rel="nofollow" href="https://github.com/rurban/perl-hash-stats">https://github.com/rurban/perl-hash-stats</a> where I added some stats and analysis for the avg. and worst cases, and how to fix this problem.<br> <p> Keeping bucket collisions sorted or using perfect hashes are the easiest fixes. It depends on the usage scenario and hash table sizes. Google uses perfect hashes; languages and smaller usages (i.e. the Linux kernel, caches) should typically use sorted bucket collisions. They also improve cache lookup performance, as with open addressing. Robin Hood hashing also looks good theoretically, but I haven't tested it yet against such attacks.<br> <p> Detecting hash flooding, as done by DJB's DNS server, or limiting MAX_POST_SIZE, as done with PHP, is also fine.<br> <p> </div> Mon, 14 Apr 2014 22:04:03 +0000 This is definitely not true of bucket hashes, https://lwn.net/Articles/477399/ https://lwn.net/Articles/477399/ nix <div class="FormattedComment"> Yes. It's not true. Those examples use pathetically tiny hashes (which were reasonably sized back in the 80s when Pick was big). These days, with the memory hierarchy being what it is, data sizes being what they are, and rehashing requiring massive pointer copying at best... well, just you try to rehash a hash containing fifty million elements and tell me that it'll not be slow enough to notice.
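<p>The quadratic blow-up rurban describes above is easy to demonstrate with a toy chained table; a minimal sketch (the class and helper names here are made up for illustration), with a constant hash standing in for attacker-chosen colliding keys:</p> <pre>
#!/usr/bin/python
# Toy chained hash table; a constant "hash" stands in for a set of
# attacker-chosen colliding keys. n inserts then cost O(n^2) chain scans.
import time

class ToyTable:
    def __init__(self, nbuckets=64):
        self.buckets = [[] for _ in range(nbuckets)]

    def insert(self, key, value, hash_fn):
        bucket = self.buckets[hash_fn(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):   # linear scan of the unsorted chain
            if k == key:
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

def timed(hash_fn, n=5000):
    table, start = ToyTable(), time.time()
    for i in range(n):
        table.insert("key%d" % i, i, hash_fn)
    return time.time() - start

print "spread out: %.2fs" % timed(hash)           # Python's built-in hash
print "colliding:  %.2fs" % timed(lambda key: 0)  # everything in one bucket
</pre> <p>Keeping the chains sorted (or falling back to a tree) caps the per-insert scan, which is the fix rurban describes.</p>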
(And this is not particularly large as such things go.)<br> <p> </div> Thu, 26 Jan 2012 18:09:29 +0000 Actually it will https://lwn.net/Articles/476666/ https://lwn.net/Articles/476666/ ekj <div class="FormattedComment"> I guess I was being unclear.<br> <p> What I meant is that in typical use, producing hashes to use for inserting and/or looking things up in dictionaries is not a major performance impact for Python.<br> <p> I don't doubt that hashes can be produced more quickly by dedicated hardware, but if you're using a small fraction of your time for that purpose to start with, then the gains from reducing it are modest.<br> <p> It'd be interesting to benchmark a variety of workloads in a variety of languages to see what fraction of time is used for hashing though, because I'm really just guessing here, and I could be wrong.<br> </div> Mon, 23 Jan 2012 12:40:29 +0000 Actually it will https://lwn.net/Articles/476664/ https://lwn.net/Articles/476664/ khim <blockquote><font class="QuotedText">Putting the thing in hardware wouldn't really change that.</font></blockquote> <p>But it would! Good hash functions are pretty hard to implement in software because you basically need to randomly shuffle bits around - and it's hard to do that in software. In hardware it's trivial.</p> <p>AES-NI instructions have a sustained speed of about 1 byte/tick (of course small strings are slower). It'll be interesting to see how large a hit it'll produce in Python, for example. It will probably be slower than the current hash, but the difference may be small enough for real usage.</p> Mon, 23 Jan 2012 12:30:53 +0000 About Future CPUs https://lwn.net/Articles/476642/ https://lwn.net/Articles/476642/ ekj <div class="FormattedComment"> Sorting integers is O(n). It's just the general sorting problem, where your operations are limited to compare() and swap(), that has n*log n as a lower bound.<br> <p> The problem isn't that sorting is heavy work; it typically isn't. <br> <p> The problem is that with certain pathological input (malicious attacks), sorting, dict-insertion and dict-lookup can be very much slower than expected.<br> <p> This can be avoided with a different choice of hash function for the dictionary, but that will tend to be slower for the typical (non-malicious) case.<br> <p> Putting the thing in hardware wouldn't really change that.<br> <p> </div> Mon, 23 Jan 2012 09:24:42 +0000 About Future CPUs https://lwn.net/Articles/476621/ https://lwn.net/Articles/476621/ XERC <div class="FormattedComment"> If there are graphics accelerators, which nowadays are integrated with the CPU chip, then why can't there be text accelerators for web servers and the like?<br> <p> For example, there might be some sort of a CPU instruction that is specially meant for hardware-based sorting of integers in a given memory range. The instruction can impose a requirement that the memory range is small enough to fit the CPU's L1 cache, and the maximum value for the memory region size might be available with one other CPU instruction. <br> <p> If graphics cards were added to PCs as an add-on, then web servers could use some text-cards that use hardware-based sorting for UTF-8 text.
Some open-source, system-wide C library might just abstract away the hardware so that PCs that don't have the special hardware do it on the main CPU, and everyone would be happy.<br> <p> After all, that's how historically the floating point arithmetic got integrated into the main CPU, and that's how the graphics accelerators (GPUs) got integrated with the most modern CPU chips.<br> <p> Given the popularity of hash tables in modern programming languages, hardware support for some seed-based, not necessarily cryptographically suitable, hash function is also pretty welcome. After all, CPUs are for running software, not the other way around.<br> </div> Mon, 23 Jan 2012 01:09:25 +0000 It was already explained... https://lwn.net/Articles/476533/ https://lwn.net/Articles/476533/ khim <blockquote><font class="QuotedText">Except that, is there a pathological case for heap sort?</font></blockquote> <p>Sometimes it's a good idea to actually look further <a href="http://lwn.net/Articles/475138/">upthread</a>.</p> <blockquote><font class="QuotedText">It's actually a very well behaved sort.</font></blockquote> <p>No, it's absolutely <b>not</b>. It's quite good WRT comparisons and swaps, but it's an absolute disaster WRT memory access. If your data fit in the L1 cache (16K-32K) it works just fine; if you hit the L2 and/or L3 cache you start doing worse than merge sort or quicksort, but the difference is tolerable (typically less than 2x). The real disaster happens once you hit main memory: to read one number from a random address in memory you need about 250-300 ticks (and this is what happens with heapsort, because prefetch cannot cope with its memory access patterns), while sustained sequential access gives you the next 32-bit number in 2-3 ticks. Since some part of the heap will be in cache even for large arrays, heapsort does not suddenly become 50-100 times slower than mergesort when you pass the L3 border; to get to this level of difference you need hundreds of gigabytes of data.</p> Sun, 22 Jan 2012 00:54:25 +0000 Heapsort is useless in real world... https://lwn.net/Articles/476421/ https://lwn.net/Articles/476421/ Wol <div class="FormattedComment"> Except that, is there a pathological case for heap sort? It's actually a very well behaved sort. Setting up the heap is the most unpredictable bit, and the most pathological case (an already-sorted array) isn't that much worse than a random array.<br> <p> And once the heap is created, it's completely predictable. For a heap of 2^x items, it will require x comparisons and x swaps to get the next value. (Give or take an off-by-one error on my part :-)<br> <p> Cheers,<br> Wol<br> </div> Fri, 20 Jan 2012 22:45:09 +0000 Heapsort is useless in real world... https://lwn.net/Articles/476283/ https://lwn.net/Articles/476283/ ekj <div class="FormattedComment"> If the "process data" step is constant time - i.e. the work performed to process one item is independent of how many items exist, then that's O(n).<br> <p> If the pathological sort-case is O(n^2), then no matter how large the constant separating these two cases is, for some number of items the sort will dominate.<br> <p> You might have a "process data" step that takes 1.2ms per item, and a sort that you expect to take 3µs * n * log2(n), and conclude that processing a million items should take ~21 minutes.
(about 20 minutes for processing, 1 minute for sorting)<br> <p> If pathological data causes that sort to instead take 3µs * n * n time, then you're looking at 20 minutes of processing and five weeks of sorting, which is clearly likely to be a DOS.<br> <p> Forcing n^2 where n*log2(n) is expected is a huge deal for largish data-sets (a million items is large, but not gargantuan): it ups the workload by a factor of 50000. Thus even if your sort *was* a small fraction of the expected work, it's now dominant.<br> </div> Fri, 20 Jan 2012 07:55:46 +0000 This is definitely not true of bucket hashes, https://lwn.net/Articles/476251/ https://lwn.net/Articles/476251/ Wol <div class="FormattedComment"> Why?<br> <p> The Pick database is based on bucket hashes, and modern implementations will resize so fast the user won't notice. "Time is proportional to delta data" - given a 4k bucket, it will take EXACTLY the same time to double the file size from 4K to 8K as it will to increase the file size from 1M to 1028K.<br> <p> Look up "linear hashing" in Wikipedia.<br> <p> Cheers,<br> Wol<br> </div> Fri, 20 Jan 2012 01:58:48 +0000 Heapsort is useless in real world... https://lwn.net/Articles/476247/ https://lwn.net/Articles/476247/ Wol <div class="FormattedComment"> I'd say heapsort is also a very good choice if you want to process data in sorted order. I.e. "get first item, process, get next item, process, rinse, repeat".<br> <p> But there you don't care too much about time and "pathological" cases, because sorting is a minor part of your costs.<br> <p> Cheers,<br> Wol<br> </div> Fri, 20 Jan 2012 01:45:03 +0000 Another note... https://lwn.net/Articles/476070/ https://lwn.net/Articles/476070/ khim <p>There is another interesting fact related to the <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm">Median of Medians algorithm</a>: k must be at least 5 <b>only</b> because the algorithm looks for the <i><b>k</b>th largest element</i>. In <b>this</b> case “k == 3” just does not work (as I've noted above). However, if you want to find the <b>median</b> then not only does "k == 3" work, it usually works better than “k == 5”. This is true because in the "find median" task you can throw away not just the “top left” <b>xor</b> “bottom right” elements (as pictured in <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Properties_of_pivot">Wikipedia's illustration</a>) but you can throw away <b>both</b> the “top left” <b>and</b> “bottom right” elements. This will lead to a complexity of T(N) ≤ C₁*N*(1 + (⅗) + (⅗)² + …) = O(N) for “k == 5” and to a complexity of T(N) ≤ C₂*N*(1 + (⅔) + (⅔)² + …) = O(N) for “k == 3”, but ⅗ (for “k == 5”) comes from “⅕ + ⅖” (for two recursive calls) while ⅔ (for “k == 3”) comes from “⅓ + ⅓” (for two recursive calls). Thus in most cases the version with “k == 3” is faster (because the recursion depth is smaller), but the difference is small and you must correctly handle the case of N=3M+1...</p> Thu, 19 Jan 2012 09:23:30 +0000 Yet another small correction... https://lwn.net/Articles/476063/ https://lwn.net/Articles/476063/ khim <blockquote><font class="QuotedText">When choosing a pivot for quickselect with the method I described, you need to have k=5 rather than k=3; otherwise the quickselect can still go n^2.</font></blockquote> <p>Sadly this is not true. <a href="http://lwn.net/Articles/475511/">Your algorithm</a> will <b>still</b> introduce imbalance on each step - even with "k == 5".
The imbalance will be smaller (2+0.3N/3+0.7N instead of 1+⅓N/2+⅔N), but recursive calls will still compound it, thus the whole algorithm will have larger than O(N log N) complexity (most probably O(N²) with an "evil source").</p> <p>The <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm">Median of Medians algorithm</a> uses <b>two</b> recursive calls to battle this phenomenon: it finds a "more-or-less OK median" (30%/70%) using the first recursive call with 0.2N elements and then it "fixes it" using another recursive call with no more than 0.7N elements. <b>The imbalance is fixed at each step, thus it does not grow beyond a certain point (30%/70%) no matter how many steps there are</b> - and the whole thing needs O(N) operations: T(N) ≤ c*N*(1 + (9/10) + (9/10)² + …) = O(N). If you use "k == 3" then your first pass will use ⅓N elements and the second pass will use ⅔N elements, and this will mean T(N) ≤ c*N*(1 + 1 + …) ≄ O(N).</p> Thu, 19 Jan 2012 08:42:57 +0000 Is ignorance bliss? https://lwn.net/Articles/476055/ https://lwn.net/Articles/476055/ cmccabe <div class="FormattedComment"> * Some people choose to call quickselect + the clever pivot selection algorithm "median of medians." Yes, this is confusing, given that finding the median of medians is only one step of the process. But it's not worth getting excited about.<br> <p> * The clever pivot selection algorithm is still recursive. Yes, it's a recursive algorithm within another recursive algorithm. We are very clever, aren't we.<br> <p> * When choosing a pivot for quickselect with the method I described, you need to have k=5 rather than k=3; otherwise the quickselect can still go n^2.<br> <p> * Your prose reminds me of "time cube." But that's probably because I was "educated stupid and evil."<br> </div> Thu, 19 Jan 2012 08:02:42 +0000 Worst case performance and "impossibly unlikely" conditions https://lwn.net/Articles/476022/ https://lwn.net/Articles/476022/ cmccabe <div class="FormattedComment"> This seems like the kind of thing where benchmarking is your friend. Luckily, it's fairly easy to go back and forth between red-black trees and splay trees.<br> <p> I'm still wary of splay trees because they generate dirty cache lines which need to be written back to main memory at some point. That eats into the available memory bandwidth. And of course, even if all of your accesses are single-threaded, there's always the possibility of false sharing.<br> </div> Wed, 18 Jan 2012 23:36:06 +0000 Denial of service via hash collisions https://lwn.net/Articles/475774/ https://lwn.net/Articles/475774/ akostadinov <div class="FormattedComment"> What particular fix would you suggest for Java that is not possible within current standard constraints?<br> I have the impression the HashMap class (and maybe a couple more classes) should be made safer because they are widely used in UI frameworks and can be easily exploited. But I don't see anything preventing the HashMap implementation from having its hash algorithm changed. Also it could easily be made dynamic through the rehash() method (i.e. change the algorithm on rehash under certain conditions).<br> </div> Tue, 17 Jan 2012 16:53:41 +0000 Is ignorance bliss? https://lwn.net/Articles/475756/ https://lwn.net/Articles/475756/ khim <blockquote><font class="QuotedText">Let's derive the running time of the median of medians algorithm.</font></blockquote> <p>Let's do.</p> <blockquote><font class="QuotedText">Since you can use Google, you already know that the answer is O(n).
But do you know why?</font></blockquote> <p>Yes, I do. I also know that your contraption has nothing to do with the median of medians algorithm - that's why I was confused.</p> <blockquote><font class="QuotedText">Median of medians is a divide-and-conquer algorithm. In each stage of the recursion, we split the array into k subarrays and recurse on them. To combine the results of those recursions, we do a constant amount of work.</font></blockquote> <p>Oops. Ah, now I see. Sorry, I missed the fact that you call median_of_medians recursively. Very embarrassing: I made the same mistake you did - looked at the name of the algorithm and assumed it just picks medians of pieces and then selects the median from these.</p> <p>Well... this algorithm is linear, all right. The only problem: it does not guarantee O(N log N) complexity for quicksort! You basically split the array in two <b>uneven</b> pieces, then combine six (if k == 5 then ten) such arrays to organize a bigger array, and you guarantee that at least two pieces go to the left and at least two pieces go to the right. This means that each recursion step potentially amplifies the disproportion. In the end you can have two pieces of quite disproportionate sizes. It's not clear if you can organize the array in such a bad fashion as to push the complexity of quicksort back to O(N²) but this looks highly probable.</p> <p>The property of the pivot produced by the <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm">Median of Medians algorithm</a> is quite different: it's always between the 30% and 70% marks, and these percentages do not depend on the number of recursive calls. Why? The <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm">Median of Medians algorithm</a> <b>also</b> introduces disproportions at each step, right? Yes, but it includes a mechanism which <b>fixes</b> these disproportions. <b>This</b> is what guarantees O(N) complexity for finding the true median and this is what guarantees O(N log N) complexity for quicksort.</p> <p>Do you have any proof that your “median of median of median…” algorithm cannot produce bad results at each step of quicksort? If not then this will put the whole exercise in the same bucket as “median of three” and not in the bucket of the <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm">Median of Medians algorithm</a> which <b>guarantees</b> O(N) complexity <b>and</b> guarantees that quicksort will not go to a recursion level deeper than log₂N. I've assumed that your code at least keeps the second property, but apparently you were more concerned with the first. My bad.</p> Tue, 17 Jan 2012 08:39:33 +0000 only a vulnerability in djbx33a? https://lwn.net/Articles/475738/ https://lwn.net/Articles/475738/ wahern <div class="FormattedComment"> Broken for static initialization vectors, at least:<br> <p> <a href="http://www.team5150.com/~andrew/blog/2007/03/breaking_superfasthash.html">http://www.team5150.com/~andrew/blog/2007/03/breaking_sup...</a><br> <p> </div> Tue, 17 Jan 2012 02:46:31 +0000 Denial of service via hash collisions https://lwn.net/Articles/475720/ https://lwn.net/Articles/475720/ NAR <div class="FormattedComment"> I wonder if the standardized algorithm was (is?) necessary for serialization...<br> </div> Mon, 16 Jan 2012 22:25:13 +0000 And the saga continues!
https://lwn.net/Articles/475715/ https://lwn.net/Articles/475715/ cmccabe <div class="FormattedComment"> Let's derive the running time of the median of medians algorithm. Since you can use Google, you already know that the answer is O(n). But do you know why?<br> <p> Median of medians is a divide-and-conquer algorithm. In each stage of the recursion, we split the array into k subarrays and recurse on them. To combine the results of those recursions, we do a constant amount of work.<br> <p> So the running time for an array of length N is<br> T(n) = kT(n/k) + C<br> where k and C are constants. Usually k=5.<br> <p> Luckily, we can solve this recurrence with case one of the master theorem. This gives a running time of O(n).<br> <p> What if, instead, we did O(n) work to combine the results of the recursions? This is essentially what you are claiming.<br> <p> Then the recurrence would be <br> T(n) = kT(n/k) + Cn + D<br> where k, C, and D are constants.<br> By case 2 of the master theorem, the running time would be O(n log n).<br> <p> Incidentally, this is the reason why quicksort's running time is O(n log n)-- because it does O(n) work before doing each recursion. In quicksort's case, k = 2.<br> <p> Anyone can use Google and find the running time of an algorithm. But unless you can derive it, you do not truly understand it. Perhaps you need to do a little bit less talking and a little bit more listening.<br> </div> Mon, 16 Jan 2012 22:05:04 +0000 only a vulnerability in djbx33a? https://lwn.net/Articles/475621/ https://lwn.net/Articles/475621/ wingo <div class="FormattedComment"> What is the feasibility of this attack on other standard hash functions, I wonder?<br> <p> Guile for example (on its master branch) uses Bob Jenkins's lookup3:<br> <p> <a href="http://burtleburtle.net/bob/hash/index.html#lookup">http://burtleburtle.net/bob/hash/index.html#lookup</a><br> </div> Mon, 16 Jan 2012 12:28:10 +0000 Denial of service via hash collisions https://lwn.net/Articles/475612/ https://lwn.net/Articles/475612/ ekj <div class="FormattedComment"> The hash function used at the core of Python is vulnerable to this. It's trivial to provide "perverse" input where all the items hash identically, and doing this changes the expected runtime of looking up an element in a dictionary from O(1) to O(n).<br> <p> Potentially, this means you can create a DOS attack on any Python program that puts user-provided things in a dictionary in such a way that the user influences the key, if the dict is large enough and/or the performance constraints are tight enough that O(n) instead of O(1) matters.<br> <p> The overhead is fairly high though; in practical testing even a 15-deep collision (i.e. an element that requires 15 attempts to locate) takes double the time, compared to an element that's found on the first attempt. If it's linear (it should be, but I haven't actually tested that), then worst-case a 15-element dict would have only half the predicted performance - which is unlikely to cause a DOS.<br> <p> But a 150-element dict could have 1/10th the expected performance, and a "perverse" 1500-element dictionary could require 100 times as long to process as expected - and that starts looking like a potential DOS.<br> </div> Mon, 16 Jan 2012 10:31:56 +0000 Lowering the bar and hitting people with it https://lwn.net/Articles/475555/ https://lwn.net/Articles/475555/ man_ls I'm so glad khim plonked me... Sun, 15 Jan 2012 14:13:34 +0000 And the saga continues!
https://lwn.net/Articles/475544/ https://lwn.net/Articles/475544/ khim <blockquote><font class="QuotedText">It is true that I could have implemented find_median more efficiently. However, this does not affect the overall time complexity of the algorithm. We never find the median of more than 3 numbers at a time.</font></blockquote> <p>Just how many times can you say gibberish without checking the facts? Here is your call which processes more than 3 numbers at a time:<br /> <pre> return find_median(m) </pre>Here the size of <tt>m</tt> is N/3, and O(N/3 log(N/3)) is O(N log N), not O(N) or O(1), sorry.</p> <blockquote><font class="QuotedText">As far as I know, we all agree on the fact that O(N log N) worst-case quicksort is possible,</font></blockquote> <p>True. <blockquote><font class="QuotedText">O(N) quickselect is possible</font></blockquote> <p><b>This</b> remains to be seen. We can use the <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm">median of medians algorithm</a> to find the pivot element for quickselect - but this is kinda pointless because the <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm">median of medians algorithm</a> can produce the result directly. Formally it'll be quickselect which produces the result in O(N), but in practice it'd be just a useless addendum to another algorithm which solves the task just fine on its own. If someone can offer a useful way to find a “good” pivot for quickselect (which does not use another algorithm capable of solving the task on its own) with guaranteed complexity O(N) - it'll be interesting. So far I've not seen such algorithms.</p> <p>Note that while the <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm">median of medians algorithm</a> is based on quickselect it's quite distinct from quickselect. For example quickselect recursively calls itself <b>once</b> on each step while the <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm">median of medians algorithm</a> calls itself <b>twice</b> on each step.</p> <blockquote><font class="QuotedText">and we all understand how median-of-medians works.</font></blockquote> <p>And this is yet another <b>NONSENSE</b><br /> 0. Yes, we (as in: <b>HelloWorld</b>, <b>me</b>, and <a href="http://lwn.net/Articles/475535/">now even <b>nybble41</b></a>) understand how it works. <b>You</b> still refuse to accept it.<br /> 1. Your algorithm produces the median of medians in O(N log N), not in O(N).<br /> 2. Apparently it's <b>still</b> not obvious to you that you can <b>only</b> find the median of medians in O(N) if you can find a plain median in O(N) - and then you can just use said median as the pivot point!</p> <p>When/if you understand how the <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm">median of medians algorithm</a> works you'll understand why you were wrong all along - from your first post in this thread.
The fact that your groups include 3 elements strongly suggests that you <b>still</b> don't understand how the <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm">median of medians algorithm</a> works.</p> <blockquote><font class="QuotedText">By the way, I am probably not going to continue posting to this thread unless someone posts some math that's clearly wrong-- like the above confusion about constant factors.</font></blockquote> <p>Fine with me. The only one who posts “some math that's clearly wrong” in this thread is you, anyway.</p> Sun, 15 Jan 2012 11:42:42 +0000 Read "How To Ask Questions The Smart Way", please... https://lwn.net/Articles/475545/ https://lwn.net/Articles/475545/ khim <blockquote><font class="QuotedText">People might tend to take you more seriously if you refrained from resorting to personal insults.</font></blockquote> <p>If people expect to be spoon-fed then it's their problem, not mine. I remember the ages-old principle <i>errare humanum est, perseverare diabolicum</i> very well, thank you very much. I'm human, after all - and sometimes make mistakes. This is why I try to give hints without insults at first. But after a few rounds it's time to recall the second part of the principle and remember that <a href="http://catb.org/~esr/faqs/smart-questions.html">what we are, unapologetically, is hostile to people who seem to be unwilling to think or to do their own homework before asking questions</a>.</p> <blockquote><font class="QuotedText">Yes, the cited section of the Wikipedia article does start out with the assertion that the algorithm can give the median in linear time. However, without already knowing the answer, that is not at all apparent from the remainder of the description.</font></blockquote> <p>Well, you actually need to read Wikipedia's article (in particular it's a good idea not to skip the <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Proof_of_O.28n.29_running_time">Proof of O(n) running time</a> and <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Important_notes">Important notes</a> parts, which state quite bluntly that <i>the worst-case algorithm can construct a worst-case O(n log n) <a href="http://en.wikipedia.org/wiki/Quicksort">quicksort</a> algorithm, by using it to find the median at every step</i>) and think. Is it such a big problem?</p> <blockquote><font class="QuotedText">Your opponents are not "twitterbrains" just because they expect more authoritative citations…</font></blockquote> <p>Yes, they are. This is the very definition of a twitterbrain: a person who expects <b>citations</b>. They don't seek hints, they are not interested in proof, they just want authoritative citations to spread them around without thinking.</p> <blockquote><font class="QuotedText">than a single, rather dense, Wikipedia article without so much as a single hyperlink for further reading.</font></blockquote> <p>This “single, rather dense, Wikipedia article” includes:<br /> 1. An explanation of the algorithm.<br /> 2. A proof of the O(N) complexity.<br /> 3. A direct explanation of how the algorithm can be used in quicksort.<br /> More than enough for anyone with more than half a brain.
If you don't trust Wikipedia then you can just check the included proof - but apparently this is beyond the abilities of the “twitter generation”.</p> <blockquote><font class="QuotedText">Note that the Median of Medians algorithm, on its own, does in fact return a result between 30% and 70% of the list, not the true median at 50%.</font></blockquote> <p>And now we are back to square one. I know, I know, the temptation to stop thinking and start clicking and copy-pasting is almost irresistible. But <b>please</b> stop for a moment and <b>think</b>: just how the h*ll does the <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm">median of medians algorithm</a> work? It needs to find the <b>true median</b> in a smaller (N/5) array. <b>How can it do that if it does not refer to any other algorithm and if it cannot find the median itself</b>? The answer is simple: the "median of medians" is an intermediate step which guarantees that the recursive call will have less than 70% of the elements to deal with. This is where <i>T(n/5) + O(n)</i> in the <i>T(n) ≤ T(n/5) + T(7n/10) + O(n)</i> formula comes from. <b>But this is not the end result</b>! The end result is the <i><b>k</b>th largest element</i> (and the median is, of course, the <i>N/2</i>th largest element, thus it can be produced by the algorithm directly). This is where <i>T(7n/10)</i> in the aforementioned formula comes from.</p> <p>Is it so hard to understand?</p> Sun, 15 Jan 2012 11:42:36 +0000 Are you moron or just play one on TV? https://lwn.net/Articles/475543/ https://lwn.net/Articles/475543/ cmccabe <div class="FormattedComment"> <font class="QuotedText">&gt; Let's see.</font><br> &gt;<br> <font class="QuotedText">&gt; &gt; ...</font><br> <font class="QuotedText">&gt; &gt; list2 = sorted(list)</font><br> <font class="QuotedText">&gt; &gt; ...</font><br> <font class="QuotedText">&gt; Brilliant idea. Not. This single line guarantees that your</font><br> <font class="QuotedText">&gt; median_of_medians function will need at least O(N log N) operations in</font><br> <font class="QuotedText">&gt; some cases. This, in turn, will probably mean that your "outer sort</font><br> <font class="QuotedText">&gt; algorithm" will have at least O(N log²N) complexity, not desired O(N log</font><br> <font class="QuotedText">&gt; N). If you'll use it recursively then situation can be even worse (not</font><br> <font class="QuotedText">&gt; sure if you can push it all the way to O(N²): for that to happen you'll</font><br> <font class="QuotedText">&gt; need to hit “worst case” often enough and I'm not 100% sure it's possible </font><br> <font class="QuotedText">&gt; to organize).</font><br> <p> It is true that I could have implemented find_median more efficiently. However, this does not affect the overall time complexity of the algorithm. We never find the median of more than 3 numbers at a time.<br> <p> In other words, we are doing O(3 log 3) work instead of O(3) work at each recursion. But O(3 log 3) and O(3) are both still O(1).<br> <p> As for the rest of what you wrote-- yes, it looks like the Java situation is more complex than I made it out to be. They do still use some form of quicksort for primitive data types-- probably to get the benefits of in-place sorting.<br> <p> I don't understand the other stuff you wrote at all. As far as I know, we all agree on the fact that O(N log N) worst-case quicksort is possible, O(N) quickselect is possible, and we all understand how median-of-medians works.
What exactly are we in disagreement about, if anything?<br> <p> By the way, I am probably not going to continue posting to this thread unless someone posts some math that's clearly wrong-- like the above confusion about constant factors.<br> </div> Sun, 15 Jan 2012 08:18:44 +0000 Are you moron or just play one on TV? https://lwn.net/Articles/475535/ https://lwn.net/Articles/475535/ nybble41 <div class="FormattedComment"> People might tend to take you more seriously if you refrained from resorting to personal insults. Yes, the cited section of the Wikipedia article does start out with the assertion that the algorithm can give the median in linear time. However, without already knowing the answer, that is not at all apparent from the remainder of the description. Your opponents are not "twitterbrains" just because they expect more authoritative citations than a single, rather dense, Wikipedia article without so much as a single hyperlink for further reading.<br> <p> However, after rereading this thread, the Wikipedia article, several other descriptions of the Median of Medians selection algorithm (several of which explicitly supported the 30%/70% argument), and finally the original article which I was able to track down via Google Scholar[1], I am forced to agree that this algorithm can be used (in combination with the quickselect algorithm) to find the true median in linear time.<br> <p> Note that the Median of Medians algorithm, on its own, does in fact return a result between 30% and 70% of the list, not the true median at 50%. Only by using it to select the pivot point for the quickselect algorithm do you get the true median of the list in linear time.<br> <p> [1] <a href="ftp://reports.stanford.edu/www/pub/public_html/public_html/cstr.old/reports/cs/tr/73/349/CS-TR-73-349.pdf">ftp://reports.stanford.edu/www/pub/public_html/public_htm...</a><br> </div> Sun, 15 Jan 2012 03:30:29 +0000 Are you moron or just play one on TV? https://lwn.net/Articles/475514/ https://lwn.net/Articles/475514/ khim <blockquote><font class="QuotedText">Hmm. I didn't expect this to be so controversial.</font></blockquote> <p>There is no controversy. Just pure obstinacy.</p> <blockquote><font class="QuotedText">Here is a quick example of median of medians.</font></blockquote> <p>Let's see.</p> <blockquote><font class="QuotedText">...<br /><pre>list2 = sorted(list)</pre>...</font></blockquote> <p>Brilliant idea. Not. This single line <b>guarantees</b> that your <tt>median_of_medians</tt> function will need <b>at least</b> O(N log N) operations in some cases. This, in turn, will probably mean that your “outer sort algorithm” will have at least O(N log²N) complexity, not desired O(N log N).
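<p>To see that cost concretely, one can instrument the posted code; a sketch (the sizes list and the rewrite below are hypothetical, mirroring the structure of the example posted elsewhere in this thread):</p> <pre>
#!/usr/bin/python
# Record the sizes find_median is called with, keeping the structure of
# the median_of_medians example posted in this thread.
sizes = []

def find_median(lst):
    sizes.append(len(lst))
    return sorted(lst)[len(lst) // 2]   # the sort being objected to

def median_of_medians(k, lst):
    if len(lst) <= k:
        return find_median(lst)
    m = [median_of_medians(k, lst[i:i+k]) for i in range(0, len(lst), k)]
    return find_median(m)               # m holds n/3 medians, not 3

median_of_medians(3, list(range(3**5)))
print max(sizes)   # prints 81 for n = 243: one call sorts n/3 elements
</pre> <p>So the final find_median(m) sorts n/3 medians, which is where the O(N log N) cost of this pivot-selection step comes from.</p>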
If you'll use it recursively then situation can be even worse (not sure if you can push it all the way to O(N²): for that to happen you'll need to hit “worst case” often enough and I'm not 100% sure it's possible to organize).</p> <p>Let me remind you of the story of the whole so-called “controversy”:<br /> <a href="http://lwn.net/Articles/475290/">HelloWorld</a>: there are algorithms that can chose the median (and hence the optimal pivot element) from a list in O(n) ⇨ <b>TRUTH</b><br /> <a href="http://lwn.net/Articles/475335/">cmccabe</a>: You're thinking of quickselect ⇨ <b>NONSENSE</b>¹)<br /> <a href="http://lwn.net/Articles/475344/">HelloWorld</a>: There are other algorithms (such as the "median of medians" algorithm) that have a worst-case performance of O(n) ⇨ <b>TRUTH</b><br /> <a href="http://lwn.net/Articles/475419/">cmccabe</a>: The median of medians, though possibly a useful thing, is not the same thing as the median ⇨ <b>IRRELEVANT</b>²)<br /> <a href="http://lwn.net/Articles/475440/">khim</a>: The first result gives you a link to the <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm">Median of Medians algorithm</a> - and yes, this algorithm can be used to find the median in linear time ⇨ <b>TRUTH</b><br /> <a href="http://lwn.net/Articles/475451/">nybble41</a>: The result of the algorithm is, as the name implies, the median of a list of subgroup medians ⇨ <b>NONSENSE</b>³)<br /> <a href="http://lwn.net/Articles/475489/">khim</a>: The end result is the true median ⇨ <b>TRUTH</b>⁴)<br /> <a href="http://lwn.net/Articles/475511/">cmccabe</a>: However, you can use median-of-medians as the pivot selection heuristic for quickselect. If you do this, you are guaranteed O(n) worst case running time for quickselect ⇨ <b>NONSENSE</b>⁵)</p> <p>As you can see the whole controversy is kind of trivial: one side postulates true (and relevant!) facts while the other side either makes incorrect assertions or states correct yet entirely irrelevant facts.</p> <p>I'm not sure why it's so hard for you to accept the fact that the <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm">Median of Medians algorithm</a>'s goal is <b>not</b> to find the median of medians (it <b>uses</b> the median of medians as an <b>intermediate</b> step) - but, well, that's the root of the whole controversy. If you stop feeling indignant for just a moment and spend five minutes to actually take a look at the <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm">Median of Medians algorithm</a> (Wikipedia does a pretty good job explaining it) then suddenly the whole discussion will stop being controversial.</p> <blockquote><font class="QuotedText">If you want to know even more, check out: <a href="http://martinsjava.blogspot.com/2009/03/test-this-is-code-more-code-end-test.html">http://martinsjava.blogspot.com/2009/03/test-this-is-code-more-code-end-test.html</a></font></blockquote> <p>AFAICS this article tells nothing about median selection - it just points out that “median of three” cannot save quicksort. Which was kind of expected from the beginning, but it's still nice to have a formal proof.</p> <blockquote><font class="QuotedText">That's why the Java guys went with merge sort.</font></blockquote> <p>Another fail. The Java guys <b>have not</b> “went with merge sort”.
They actually use both <a href="http://docs.oracle.com/javase/6/docs/api/java/util/Arrays.html#sort(long[])">tuned quicksort</a> and <a href="http://docs.oracle.com/javase/6/docs/api/java/util/Arrays.html#sort(java.lang.Object[])">merge sort</a>. Apparently they felt it'd be a good idea to spice up the life of a programmer. Need to sort Integer[]? Sure, be my guest - O(N log N) is guaranteed. Managed to switch to int[] to save on allocations? Have a nice surprise: an O(N²) sort! Otherwise life will be dull, you know...</p> <p>──────────<br /> ¹) Obviously <a href="http://lwn.net/Articles/475290/">here</a> <b>HelloWorld</b> does not talk about quickselect because quickselect does not guarantee O(N) complexity.<br /> ²) Here the <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm">Median of Medians algorithm</a> (100% relevant because it can be used to find the median in O(N) time) was somehow transformed into the <i>median of medians</i> (which is of course an entirely different thing).<br /> ³) The same story again: the phrase <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm">Median of Medians algorithm</a> can fit in a twitterbrain, <i>linear algorithm for the general case of selecting the <b>k</b>th largest element was published</i> (which is an explanation of what the said algorithm <b>does</b>) overflows it. We are the Twitter generation! 100 characters, 14 words, 1 click? NOOOOOO! That's too long! That's too hard!<br /> ⁴) Here I've tried to simplify the algorithm's goal description to fit it in a twitterbrain - apparently without any success<br /> ⁵) This particular “find median of medians” algorithm requires at least O(N log N) operations, which gives us at least O(N log²N) complexity. Close, but no cigar.</p> Sat, 14 Jan 2012 23:39:52 +0000 Worst case performance and "impossibly unlikely" conditions https://lwn.net/Articles/475520/ https://lwn.net/Articles/475520/ vonbrand <p>Here are a few references on quicksort's worst case in my reference list.</p> <ul> <li>Musser, David, "Introspective Sorting and Selection Algorithms", Software -- Practice and Experience, 27(8), 983-993 (1997), in <a href="http://www.cs.rpi.edu/~musser/gp/introsort.ps">PostScript</a> <li>McIlroy, M. Douglas, "A Killer Adversary for Quicksort", Software -- Practice and Experience, 29(4), 341-344 (1999) </ul> Sat, 14 Jan 2012 23:03:13 +0000 What happened? https://lwn.net/Articles/475511/ https://lwn.net/Articles/475511/ cmccabe Hmm. I didn't expect this to be so controversial.<p> Here is a quick example of median of medians.<br> <pre>
#!/usr/bin/python

def find_median(list):
    n = len(list)
    list2 = sorted(list)
    if ((n % 2) == 0):
        return (list2[int(n/2) - 1] + list2[int(n/2)]) / 2
    else:
        return list2[int(n/2)]

def median_of_medians(k, list):
    n = len(list)
    if (n <= k):
        return find_median(list)
    m = []
    for i in range(0, n, k):
        m.append(median_of_medians(k, list[i:i+k]))
    return find_median(m)

list = [3, 1, 4, 4, 5, 9, 9, 8, 2]
print find_median(list)
print median_of_medians(3, list)
</pre> You can see that the median of our example list is 4, but the "median of medians" (with k=3) is 5. The median of medians is not the same as the median.<p> However, you can use median-of-medians as the pivot selection heuristic for quickselect.
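<p>A compact sketch of that combination - quickselect whose pivot comes from the groups-of-5 median-of-medians step - may make the shape of the guarantee clearer (the function names here are made up; this is an illustration, not a tuned implementation):</p> <pre>
#!/usr/bin/python
# Quickselect with a BFPRT (median-of-medians) pivot: O(n) worst case.
def select(lst, k):
    # k-th smallest element of lst, 0-based
    if len(lst) <= 5:
        return sorted(lst)[k]
    # medians of groups of 5
    medians = [sorted(lst[i:i+5])[len(lst[i:i+5]) // 2]
               for i in range(0, len(lst), 5)]
    # first recursive call: true median of ~n/5 medians, used only as a pivot
    pivot = select(medians, len(medians) // 2)
    lo = [x for x in lst if x < pivot]
    hi = [x for x in lst if x > pivot]
    eq = len(lst) - len(lo) - len(hi)
    if k < len(lo):
        return select(lo, k)                  # second recursive call,
    if k < len(lo) + eq:                      # on at most ~7n/10 elements
        return pivot
    return select(hi, k - len(lo) - eq)

data = [3, 1, 4, 4, 5, 9, 9, 8, 2]
print select(data, len(data) // 2)            # 4, the true median this time
</pre> <p>Note the two recursive calls per step - one on the n/5 medians and one on the surviving partition - which is the two-call structure debated at length above.</p>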
If you do this, you are guaranteed O(n) worst case running time for quickselect.<p> So yes, in theory, you could use quicksort with quickselect as the pivot selection algorithm, and median-of-medians as the pivot selection algorithm for quickselect. And it would all be N log N. However, the constant factors would be horrendous. You'd basically be doing 3x as much work as the guy who just used merge sort. That's why the Java guys went with merge sort.<p> If you want to know even more, check out: <a href="http://martinsjava.blogspot.com/2009/03/test-this-is-code-more-code-end-test.html">http://martinsjava.blogspot.com/2009/03/test-this-is-code-more-code-end-test.html</a>.<p> TL;DR: it's easy to code an in-place quicksort, and it's still useful for some things. However, it can't be made adversary-proof without losing a lot of its real-world performance advantage over other N log N sorts.<p> Sat, 14 Jan 2012 21:45:42 +0000 Denial of service via hash collisions https://lwn.net/Articles/475490/ https://lwn.net/Articles/475490/ liljencrantz This problem is actually not fixable in Java without first updating the standard. The current Java standard mandates that the String hashCode is calculated as: <p> s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1] <p> Of course, one could still make case-by-case fixes for every piece of software that is vulnerable, like what Oracle is currently doing, but the number of libraries that will require such fixes is probably in the thousands. The odds of succeeding seem rather slim. <p> Mandating a specific hash algorithm always struck me as a bad idea, for exactly this type of reason. There is a security bug, and it's impossible to fix in a general way while still following the standard. It seems to me that as computers grow gradually faster, the trade-off of today's rather crappy, but fast, hash functions will become increasingly less attractive. In a few years, I believe pretty much all hash functions will use cryptographically safe hash algorithms, just to be safe. Mandating a crappy algorithm in the standard needlessly locks you in the past. Sat, 14 Jan 2012 11:48:42 +0000 What happened? https://lwn.net/Articles/475489/ https://lwn.net/Articles/475489/ khim <p>This is just sad. LWN was a place where you were able to come and discuss things easily, without trying to do something with the mass of twitter generation guys who have eyes connected directly to hands (perhaps via spinal cord but definitely bypassing brain) and/or who reply without thinking (this makes the brain pointless: it's there but it does something unrelated to the commenting process).</p> <p>You both (<b>cmccabe</b> and <b>nybble41</b>) say that the algorithm picks the Median of Medians - and this is true. Now the question: how can it do that if that's the end result? If it uses some other algorithm to select the true median from the medians - then what's the point? Answer (obvious and true): yes, it <b>does</b> select the "Median of Medians" (hence the name), but no, <b>this is not the end result</b>. The end result is the true median.</p> <p>I honestly don't know what to do with people of the "twitter generation". When you are an employer you can just fire them and refuse to hire new ones. When you are an open site... they tend to clog all discussions and make them hard to follow.</p> <p>So far LWN was a place which happily avoided this fate (perhaps because it used a subscription model), but <b>cmccabe</b> and <b>nybble41</b> are subscribers, thus it's obviously not a panacea.</p> <p>Guys.
Please, please, please don't write on LWN if you don't have time to think. Everyone makes mistakes (the phrase <i>errare humanum est, …</i> is well-known and loved), but somehow people start forgetting the second part of that same phrase (<i>perseverare diabolicum</i>). <b>Be human! PLEASE!!!</b></p> Sat, 14 Jan 2012 11:32:25 +0000 Where does this hubris come from? https://lwn.net/Articles/475461/ https://lwn.net/Articles/475461/ HelloWorld <div class="FormattedComment"> <font class="QuotedText">&gt; The result of the algorithm is, as the name implies, the median of a list of subgroup medians.</font><br> No dude, it's not. <br> </div> Sat, 14 Jan 2012 01:13:50 +0000 Where does this hubris come from? https://lwn.net/Articles/475451/ https://lwn.net/Articles/475451/ nybble41 <div class="FormattedComment"> <font class="QuotedText">&gt; The first result gives you a link to the Median of Medians algorithm - and yes, this algorithm can be used to find the median in linear time.</font><br> <p> The result of the algorithm is, as the name implies, the median of a list of subgroup medians. It is not the same as the median of the entire list. Given the group size of five used in the article, the result will split the list somewhere between 30% and 70%, rather than at 50% as with a true median. So thus far the GP is entirely correct.<br> <p> Despite that, however, it can be used to guarantee O(n) worst-case performance for the quickselect algorithm, and thus O(n log n) for quicksort.<br> </div> Sat, 14 Jan 2012 00:08:36 +0000 Where does this hubris come from? https://lwn.net/Articles/475440/ https://lwn.net/Articles/475440/ khim <blockquote><font class="QuotedText">The median of medians, though possibly a useful thing, is not the same thing as the median.</font></blockquote> <p>Un·be·liev·a·ble. <a href="http://lmgtfy.com/?q=median+of+medians"><b>STFW</b></a>! The first result gives you a link to the <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm">Median of Medians algorithm</a> - and yes, this algorithm can be used to find the median in linear time.</p> <blockquote><font class="QuotedText">So that's what I was responding to.</font></blockquote> <p>The only thing your response shows is that you just refuse to think. First you were given a hint: it's possible to find the median in linear time (already enough <a href="http://lmgtfy.com/?q=median+linear+time">to find the answer</a>). Then you were given the name of the algorithm. Yet still you continue to persist in your ignorance. Perhaps a <a href="http://www.cs.cmu.edu/afs/cs/academic/class/15451-s07/www/lecture_notes/lect0125.pdf">direct link</a> will convince you?</p> <p>Can we, please, close this discussion? Yes, it's possible to find the median in linear time and yes, you can make quicksort O(N log N) in the worst case using this approach. No, this is usually not feasible because it adds so many manipulations to quicksort that it stops being quick - if you really worry about worst case scenarios there are different algorithms.</p> Fri, 13 Jan 2012 23:34:42 +0000 Worst case performance and "impossibly unlikely" conditions https://lwn.net/Articles/475435/ https://lwn.net/Articles/475435/ rgmoore <p>That's because they changed the default sort to merge sort, which is stable, starting in 5.8. If you want to test whether their quicksort is stable from run to run over the same data, you'd need to <code>use sort '_quicksort'; no sort 'stable';</code> or the like to force it to use quicksort.
Fri, 13 Jan 2012 23:20:02 +0000 Worst case performance and "impossibly unlikely" conditions https://lwn.net/Articles/475419/ https://lwn.net/Articles/475419/ cmccabe <div class="FormattedComment"> The median of medians, though possibly a useful thing, is not the same thing as the median.<br> <p> In your post you said:<br> <font class="QuotedText">&gt; But there are algorithms that can chose the median</font><br> <font class="QuotedText">&gt; (and hence the optimal pivot element) from a list in O(n).</font><br> <p> So that's what I was responding to.<br> </div> Fri, 13 Jan 2012 21:01:48 +0000 Adaptively changing hash? https://lwn.net/Articles/475418/ https://lwn.net/Articles/475418/ jzbiciak <div class="FormattedComment"> Or switch to a balanced binary tree if you reseed too many times.<br> </div> Fri, 13 Jan 2012 20:57:54 +0000 Worst case performance and "impossibly unlikely" conditions https://lwn.net/Articles/475417/ https://lwn.net/Articles/475417/ jzbiciak <P>I think the comment on trees vs. hashes was meant to consider an apples-to-apples comparison at the next level up. For example, if you were to replace Perl's "hash" implementation with an automatically balanced tree representation--e.g. an AVL tree or RB tree or the like--under the hood, almost nobody would notice, but the time complexity would change from "nearly constant time for non-pathological data" to "logarithmic time for all data." Perl programmers don't change a line of Perl; they just see different performance characteristics. </P><P> Now, the details of other languages' container types are a different story. I realize that the comment you replied to mentions C++, and your complaint seems C++-specific. But if you compare a <A rel="nofollow" HREF="http://www.sgi.com/tech/stl/hash_map.html">hash_map</A> to a <A rel="nofollow" HREF="http://www.sgi.com/tech/stl/Map.html">map</A> where your keys are strings, there really isn't a lot of difference between using either. You either provide an equals operator or a less-than operator. Ooooh. </P> Fri, 13 Jan 2012 20:53:24 +0000 Worst case performance and "impossibly unlikely" conditions https://lwn.net/Articles/475416/ https://lwn.net/Articles/475416/ joey <div class="FormattedComment"> Perl's sort does sort the same list the same way twice, despite being unstable. If the pivot is chosen randomly, it must be pseudorandom with a seed of the input list.<br> <p> To test this I used code like this, which exercises the instability of the sort:<br> <p> perl -MData::Dumper -e 'print Dumper sort { $a-&gt;[0] &lt;=&gt; $b-&gt;[1] } ([1,1],[2,3],[1,2],[2,1],[2,4])'<br> <p> perl -le 'sub c { $a-&gt;[0] &lt;=&gt; $b-&gt;[1] } @l=([1,1],[2,3],[1,2],[2,1],[2,4]); print sort(\&amp;c, @l) == sort(&amp;c, @l)'<br> </div> Fri, 13 Jan 2012 20:49:30 +0000
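<p>Finally, liljencrantz's point above about the mandated String hash is easy to see in action: "Aa" and "BB" hash identically under h(s) = s[0]*31^(n-1) + ... + s[n-1], and colliding pairs can be concatenated to breed exponentially many colliding keys. A small sketch (written in Python for brevity rather than Java, with a made-up helper name):</p> <pre>
#!/usr/bin/python
# The Java-mandated String hash: h = h*31 + c for each character.
import itertools

def java_hash(s):
    h = 0
    for c in s:
        h = h * 31 + ord(c)
    return h

assert java_hash("Aa") == java_hash("BB")   # both are 2112

# h(s+t) = h(s)*31^len(t) + h(t), so concatenating colliding 2-char
# blocks yields 2^16 = 65536 distinct keys that share one hash value:
keys = ["".join(p) for p in itertools.product(["Aa", "BB"], repeat=16)]
print len(keys), len(set(java_hash(k) for k in keys))   # 65536 1
</pre> <p>This is exactly the mass-collision input the article describes, and why a standard that pins down the hash function cannot be fixed without changing the standard.</p>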