LWN: Comments on "Finer-grained kernel address-space layout randomization" https://lwn.net/Articles/812438/ This is a special feed containing comments posted to the individual LWN article titled "Finer-grained kernel address-space layout randomization". en-us Wed, 05 Nov 2025 09:19:46 +0000 Wed, 05 Nov 2025 09:19:46 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Finer-grained kernel address-space layout randomization https://lwn.net/Articles/815243/ https://lwn.net/Articles/815243/ nix <div class="FormattedComment"> <font class="QuotedText">&gt; Now that we finally starting to get rid of those slow BIOSes, 1 second at boot time is really not insignificant.</font><br> <p> Not on non-servers anyway. Servers are still a disaster area. IPMI reports that EFI-only grunty server takes one minute and 17 seconds in EFI before it gets around to booting the OS. All but 18 seconds of that time is spent in DXE; a significant chunk of that is spent doing unbearably slow serial initialization of every USB device on all buses attached to the machine, one... by... one. Because the machine has twenty cores across two processors and all of them were initialized much earlier in boot, but doing anything in parallel is too hard, I guess.<br> <p> As long as unimportant third-party EFI vendors like, uh, Intel (this is an Intel-motherboard box, S2600CTWR) are turning out code like *that*, there's no hope of fast booting, EFI or not.<br> </div> Tue, 17 Mar 2020 21:16:54 +0000 I have locality concerns https://lwn.net/Articles/814520/ https://lwn.net/Articles/814520/ Omnifarious <div class="FormattedComment"> Will this slow things down by placing functions that work together far away from each other? For L1-L3 cache, it's likely not that big a deal as the chunk size is 16 bytes. But if parts of the kernel can be paged out, those are 4k. There may be other locality concerns of which I'm not aware.<br> <p> Has this been thought through? Am I being worried over nothing?<br> </div> Tue, 10 Mar 2020 18:44:30 +0000 Finer-grained kernel address-space layout randomization https://lwn.net/Articles/813614/ https://lwn.net/Articles/813614/ excors <div class="FormattedComment"> That sounds like it wouldn't work when your systems are all VMs booted from the same image, or embedded/mobile devices booted from a signed firmware image.<br> </div> Sun, 01 Mar 2020 23:56:05 +0000 Finer-grained kernel address-space layout randomization https://lwn.net/Articles/813613/ https://lwn.net/Articles/813613/ amarao <div class="FormattedComment"> Why it should be done at boot time? Wouldn't a shuffling at update-grub stage be enough? 
Each system would have its own map, and no runtime overhead would be paid.

Sun, 01 Mar 2020 21:23:13 +0000

Finer-grained kernel address-space layout randomization
https://lwn.net/Articles/813411/
rgenoud

+1
Now that we're finally starting to get rid of those slow BIOSes, 1 second at boot time is really not insignificant.

And filtering out security features in embedded products is not always a smart move, but fast response time has a high priority on users' wish lists.

Thu, 27 Feb 2020 13:27:34 +0000

Finer-grained kernel address-space layout randomization
https://lwn.net/Articles/813387/
pabs

Repeating another comment from the NetBSD 9.0 article:

https://blog.netbsd.org/tnf/entry/the_strongest_kaslr_ever

I note that the post is from 2017.

Thu, 27 Feb 2020 07:22:02 +0000

Finer-grained kernel address-space layout randomization
https://lwn.net/Articles/813101/
ras

As a user, 1 second is not always "insignificant". It is for desktops and supercomputers, but Linux runs on more than desktops and supercomputers. In fact, I'd wager there are more tiny computers out there in TVs, routers, and car entertainment systems than big iron, and a 1-second boot delay is significant if you are waiting for the machine after pressing the power button.

But I guess it will be a kernel compile-time option, so it won't matter. It's just another feature the embedded guys will have to know they must turn off - or, alternatively, another feature the distros will have to turn on.

Sun, 23 Feb 2020 22:02:45 +0000

Finer-grained kernel address-space layout randomization
https://lwn.net/Articles/812939/
excors

That makes me think of https://users.cs.northwestern.edu/~robby/courses/322-2013-spring/mytkowicz-wrong-data.pdf ("Producing Wrong Data Without Doing Anything Obviously Wrong!"), where perlbench was compiled with -O2 and -O3, and they found that -O3 was anywhere from 0.88 to 1.09 times faster than -O2 depending on the size of the environment variables (which get copied into the process's stack and heap and therefore affect data alignment).

What it demonstrates is that the typical approach to benchmarking optimisations - i.e. measure the time taken to run the old version of the code, then measure the time taken to run the new version, in as close to exactly the same environment as possible - is dangerously naive. You can carefully measure that your proposed optimisation gives a 2.0+/-0.1% benefit in your reproducible test environment, which looks like a nice improvement with good data to back it up; but it's quite possible you've actually made the code 2% slower in other similar-but-not-identical environments.

Randomising the kernel layout makes it harder to test in exactly the same environment each time, because rebooting may significantly change performance.
But if you were relying on testing in exactly the same environment to get reproducible results, your results are probably useless anyway.

To minimise that problem, I suppose optimisations should be judged by benchmark suites run over a diverse range of targets (multiple kernel versions, multiple CPU models, multiple Linux distros, intentionally varying important factors like stack alignment, etc.). (A sketch of one way to vary such factors is appended at the end of this feed.) The error bars will likely be very large, and many optimisations (even perfectly good ones) will be rejected as not statistically significant improvements.

Smaller optimisations could still be justified based on an explanation of why they are hypothetically an improvement (e.g. "it reduces data cache misses by 50% in this function, and a system profiler shows roughly 10% of CPU time is spent in this function"), plus measurements to back up that hypothesis (e.g. use hardware performance counters to count cache misses in a microbenchmark, over many data sizes and alignments, to verify the 50% reduction), plus a sufficiently expert understanding of CPU microarchitecture to avoid common traps. Ignore macrobenchmark execution time, because that's too noisy to give any useful information; you have to rely more on intuition backed up by the limited (but relatively reliable) data from performance counters. That doesn't feel like a very satisfactory approach, but it seems better than putting faith in very precise but inaccurate numbers.

So if you're concerned about this kernel change's effects on performance measurement, you should try to find better ways to measure performance.

Thu, 20 Feb 2020 19:02:29 +0000

Finer-grained kernel address-space layout randomization
https://lwn.net/Articles/812905/
clugstj

As a developer, it worries me that each time you boot the kernel, it could have different performance characteristics.

Thu, 20 Feb 2020 13:09:01 +0000

Hooray for finer-grained kernel address-space layout randomization
https://lwn.net/Articles/812885/
david.a.wheeler

This looks really promising. It looks like it'll make it more challenging for attackers to *exploit* certain kinds of Linux kernel vulnerabilities (or at least reduce their damage). While it's always best to eliminate vulnerabilities, having defenses for the remaining vulnerabilities is a great thing, and this looks like a step up.

Thu, 20 Feb 2020 03:03:18 +0000
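A minimal sketch of the environment-size perturbation that excors refers to from "Producing Wrong Data Without Doing Anything Obviously Wrong!": it re-runs the same benchmark under environments of different sizes, so alignment-induced variation shows up as a spread of results rather than being frozen into one deceptively precise number. The benchmark path "./bench", the padding sweep, and the step size are hypothetical placeholders chosen for illustration, not anything taken from the article or the comments.

/*
 * Hypothetical sketch: run the same benchmark repeatedly while varying the
 * size of the environment.  The environment is copied into the new
 * process's address space, so changing its size shifts stack/data
 * alignment and exposes alignment-sensitive variation that a fixed
 * environment would hide.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

static double run_once(size_t pad_bytes)
{
    /* Build an environment whose only variable is PAD=<pad_bytes 'x's>. */
    char *pad = malloc(pad_bytes + sizeof("PAD="));
    if (!pad)
        exit(1);
    strcpy(pad, "PAD=");
    memset(pad + 4, 'x', pad_bytes);
    pad[4 + pad_bytes] = '\0';
    char *envp[] = { pad, NULL };
    char *argv[] = { "./bench", NULL };   /* hypothetical benchmark binary */

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    pid_t pid = fork();
    if (pid < 0)
        exit(1);
    if (pid == 0) {
        execve("./bench", argv, envp);
        _exit(127);                       /* exec failed */
    }
    waitpid(pid, NULL, 0);

    clock_gettime(CLOCK_MONOTONIC, &end);
    free(pad);
    return (end.tv_sec - start.tv_sec) +
           (end.tv_nsec - start.tv_nsec) / 1e9;
}

int main(void)
{
    /* Sweep the padding from 0 to 4096 bytes in 64-byte steps. */
    for (size_t pad = 0; pad <= 4096; pad += 64)
        printf("env padding %4zu bytes: %.3f s\n", pad, run_once(pad));
    return 0;
}

In a real comparison you would run both the old and the new code across the whole sweep (and across the other factors excors lists, such as kernel version and CPU model) and compare the distributions, not a single pair of numbers; the hardware-performance-counter route excors mentions (for example, perf stat -e cache-misses on a microbenchmark) is the complement to this when the wall-clock spread is too noisy to be useful.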