The question Linus raised is how much of a slowdown we can accept in a kernel everyone uses in order to harden address space layout randomisation (ASLR), which is itself only a secondary defence against the primary class of attack (buffer or heap overflow), when too few bits of entropy are available within the ASLR defence to prevent brute-force attacks against it in the first place. If 8 bits of genuine entropy are available, a buffer overflow exploit that would otherwise succeed works only one attempt in 2**8, i.e. one in 256; with 16 bits, and a fully unpredictable ASLR across those permutations, it works one attempt in 2**16, compared with succeeding outright on a system without ASLR.
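To put rough numbers on that, here is a hypothetical worked example in C (nothing from the kernel itself, and it assumes the layout is re-randomised after every failed, crashing attempt):

    /* Brute-force odds against ASLR for a given number of entropy bits,
     * assuming each failed guess crashes the target and the layout is
     * re-randomised on restart (so attempts are independent). */
    #include <stdio.h>

    int main(void)
    {
        const unsigned int bits[] = { 8, 16 };
        unsigned int i;

        for (i = 0; i < sizeof(bits) / sizeof(bits[0]); i++) {
            unsigned long long positions = 1ULL << bits[i];
            printf("%2u bits: 1 in %llu chance per attempt, "
                   "~%llu attempts expected on average\n",
                   bits[i], positions, positions);
        }
        return 0;
    }

With 8 bits that is about 256 expected attempts, with 16 bits about 65536, which is exactly why the number of usable entropy bits matters more than the mere presence of ASLR.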
So the whole point of ASLR isn't to make a buffer or heap attack impossible, but to make it harder for an attacker to get code of their choosing executed by a vulnerable application, which is hopefully more likely to crash when an exploit is attempted than to hand the attacker a privilege escalation. In a more thoroughly secured system the attacker couldn't carry out the primary buffer or heap overflow attack through an unvalidated input path at all, because that primary weakness would have been dealt with first; ASLR is a second line of defence, not the first.
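For illustration only (a made-up toy, not code from any real application), the primary bug class ASLR papers over is the classic unchecked copy, and the first-line fix is bounds checking at the input rather than randomisation:

    #include <stdio.h>
    #include <string.h>

    /* The primary bug: no length check, so input longer than 63 bytes
     * overflows the stack buffer, and ASLR only makes exploiting it harder. */
    void vulnerable(const char *input)
    {
        char buf[64];
        strcpy(buf, input);
        printf("%s\n", buf);
    }

    /* The first-line fix: validate/bound the copy, removing the overflow. */
    void fixed(const char *input)
    {
        char buf[64];
        snprintf(buf, sizeof(buf), "%s", input);
        printf("%s\n", buf);
    }

    int main(int argc, char **argv)
    {
        fixed(argc > 1 ? argv[1] : "hello");
        return 0;
    }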
So if the ASLR code is too expensive in performance terms, it will have to be a compile-time option, and too few people will compile it in for it to be of any use to them at all. Better to make it fast enough that it doesn't have to be an option, and unpredictable enough that anyone trying to defeat it is forced to brute-force it rather than predict the PRNG state.
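On that unpredictability point, a minimal userspace sketch (assuming getrandom(2) as the entropy source and an arbitrary 16-bit entropy budget, not the kernel's actual implementation) of drawing an offset an attacker can't recover by predicting PRNG state:

    #include <stdio.h>
    #include <stdint.h>
    #include <sys/random.h>

    #define ASLR_BITS  16u     /* assumed entropy budget for this example */
    #define PAGE_SHIFT 12u     /* offsets kept page-aligned */

    int main(void)
    {
        uint32_t raw;

        /* Draw from a cryptographically strong source instead of a
         * predictable PRNG, so guessing the seed/state is not a shortcut. */
        if (getrandom(&raw, sizeof(raw), 0) != (ssize_t)sizeof(raw)) {
            perror("getrandom");
            return 1;
        }

        /* Keep only ASLR_BITS of randomness, shifted into a page-aligned offset. */
        uint64_t offset = (uint64_t)(raw & ((1u << ASLR_BITS) - 1)) << PAGE_SHIFT;

        printf("randomised base offset: 0x%llx\n", (unsigned long long)offset);
        return 0;
    }

The design point is that with a strong source the only attack left on the randomisation itself is exhaustive search over the 2**ASLR_BITS positions, which is the behaviour the defence is actually sized for.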