Semi-closing a hole
Posted Apr 11, 2012 23:57 UTC (Wed) by man_ls (guest, #15091)Parent article: Python 2.6.8, 2.7.3, 3.1.5, and 3.2.3 security release
I know that changing the default hashing algorithm would break some of my code, particularly my unit tests, which compare the output of my program against known-good test files. No problem: just regenerate the test files and rerun them; a bit obnoxious, but easy. However, making the hashing order random would make it nearly impossible to keep static test files, and my whole scheme would not so much fly as plummet.
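One way to keep static test files working regardless of hash order is to serialize output deterministically instead of relying on dict iteration order. A minimal sketch (the helper name is mine, not from any particular project):

```python
import json

def stable_dump(data):
    """Serialize with sorted keys so the output does not depend on
    dict iteration order, and therefore not on hash randomization."""
    return json.dumps(data, sort_keys=True, separators=(",", ":"))

record = {"zebra": 1, "apple": 2, "mango": 3}
# Same string on every run, whatever the hash seed:
print(stable_dump(record))  # {"apple":2,"mango":3,"zebra":1}
```

Any canonical serialization works; the point is that the test fixture compares against a representation the hash function cannot influence.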
Closing a security hole and then making the fix optional (and disabled by default) seems like a weird decision. It gets us the worst of both worlds: code cannot rely on hashing order, since anyone can turn randomization on with a command-line parameter; meanwhile, insecure code can still be run insecurely by a careless administrator, a recipe for disaster. That is exactly why non-default behaviors are usually avoided for security fixes: to lift the burden from overloaded admins.
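These releases expose the switch as the -R flag and the PYTHONHASHSEED environment variable; on a current Python 3 interpreter the environment variable still works, as this small sketch shows (the helper name is mine):

```python
import os
import subprocess
import sys

def hash_of_abc(seed):
    """Run a child interpreter with an explicit PYTHONHASHSEED and
    report what hash('abc') evaluates to in that process."""
    env = dict(os.environ, PYTHONHASHSEED=str(seed))
    out = subprocess.check_output(
        [sys.executable, "-c", "print(hash('abc'))"], env=env)
    return int(out)

# With an explicit seed the hash is reproducible across runs...
assert hash_of_abc(1) == hash_of_abc(1)
# ...while different seeds (almost always) produce different hashes.
print(hash_of_abc(1), hash_of_abc(2))
```

Setting PYTHONHASHSEED=random (or an unset seed on Python 3.3+, where randomization became the default) makes the value change on every interpreter start, which is precisely what breaks ordering-dependent code.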
I would have very much preferred a default solution that didn't break things, such as limiting the number of collisions allowed in the hash table (to some crazy high number such as 100): after 100 collisions, stop accepting elements. All code would then be protected by default. I don't see what kind of code would break under this change, other than crazy tests of the hashing algorithm itself, and in any case the collision check could be made disablable through the same mechanism provided here.
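The collision-cap idea can be sketched with a toy chained hash table; this is purely illustrative (CPython's dict uses open addressing and has no such cap), with the class name and bucket layout invented for the example:

```python
class CappedDict:
    """Toy chained hash table that refuses inserts once any single
    bucket exceeds max_collisions entries, so a collision-flooding
    attacker cannot force quadratic-time behavior."""

    def __init__(self, nbuckets=8, max_collisions=100):
        self.nbuckets = nbuckets
        self.max_collisions = max_collisions
        self.buckets = [[] for _ in range(nbuckets)]

    def __setitem__(self, key, value):
        bucket = self.buckets[hash(key) % self.nbuckets]
        for i, (k, _) in enumerate(bucket):
            if k == key:          # updating an existing key is always fine
                bucket[i] = (key, value)
                return
        if len(bucket) >= self.max_collisions:
            raise RuntimeError("too many hash collisions in one bucket")
        bucket.append((key, value))

    def __getitem__(self, key):
        for k, v in self.buckets[hash(key) % self.nbuckets]:
            if k == key:
                return v
        raise KeyError(key)
```

Legitimate workloads spread keys across buckets and never hit the cap; only deliberately crafted colliding keys trip the error, turning a denial-of-service into a loud, immediate exception.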
It seems that the Python community often makes engineering choices that look wrong from my point of view, which is a pity, because I really like the language.
