LWN: Comments on "Topics in live kernel patching" https://lwn.net/Articles/706327/ This is a special feed containing comments posted to the individual LWN article titled "Topics in live kernel patching". Topics in live kernel patching https://lwn.net/Articles/707710/ nix <div class="FormattedComment"> <font class="QuotedText">&gt; This sort of change, too, is hard to detect; it would be nice, he said, to have a GCC option to ask it to create a log of the optimizations it has done.</font><br> <p> This is... ah, impractical, because there is really no boundary between 'optimizations' and 'compilation': there is just a long series of transformations that convert the source into the output. Some of these transformations are optional, but you surely don't want to log all of those (and in how much detail? many optimizations change their output depending on which optimizations have run before them...). The code added for LTO, which records the state of the compiler (including compilation flags) in the LTO streamer output, might be a good first step: combine that with -frandom-seed= and you can pretty much guarantee reproducible output between runs, which is probably the best you can hope for here.<br> <p> </div> Wed, 30 Nov 2016 17:45:41 +0000 Topics in live kernel patching https://lwn.net/Articles/706937/ eduard.munteanu <div class="FormattedComment"> There is one alternative which does not seem to be mentioned: asking the one true authoritative source. That is *drumroll*... the compiler. It is practically the only thing which could possibly have a clear view of code semantics, so the creation of binary patches would best be handled there. It's probably a large project, especially for GCC, but it's the right thing to do in the long run.<br> </div> Mon, 21 Nov 2016 07:48:38 +0000 Architectures https://lwn.net/Articles/706920/ jem <div class="FormattedComment"> This reminded me of the LWN article "Creating a kernel build farm" from Oct 5 (<a href="https://lwn.net/Articles/702375/">https://lwn.net/Articles/702375/</a>). Does anyone have insight into the economics of using this solution instead of "going small"?<br> <p> Packet.net advertises a price of USD 0.50 per hour. You'll get quite a lot of hours for the price of four MiQi boards (16 A17 cores total) plus all the extra necessary gear (switch, power supply, cabling, etc.).<br> </div> Sun, 20 Nov 2016 18:48:14 +0000 Architectures https://lwn.net/Articles/706892/ mmendez <div class="FormattedComment"> Also check out packet.net's just-released Type-2A servers (two 48-core Cavium ThunderX processors) <a href="https://www.packet.net/bare-metal/servers/type-2a">https://www.packet.net/bare-metal/servers/type-2a</a>. Hard to get one right now as they are being scooped up very quickly, but we are going to be bringing more online.<br> </div> Sat, 19 Nov 2016 21:59:40 +0000 Topics in live kernel patching https://lwn.net/Articles/706888/ mjthayer <div class="FormattedComment"> <font class="QuotedText">&gt; As the article mentions, the patch build system can safely cope with "most of the optimization issues". If optimisations applied to a function propagate to callees or elsewhere, then those call sites will be picked up by the diff as well.</font><br> <p> Not quite sure what you are saying there. Do you mean that the current approach is already good enough, or are you questioning whether it is? It sounds rather fragile to me, in the category "it works fine until it doesn't".<br> </div> Sat, 19 Nov 2016 18:06:55 +0000
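To make the propagation case being debated here concrete, a minimal, self-contained C sketch (the function names are invented for illustration) of how a source change to one function can alter another function's compiled code, which is why a function-granularity binary diff of the kind the article describes flags the caller as modified too:

/* inline_demo.c -- hypothetical example; build with: gcc -O2 inline_demo.c */
#include <stdio.h>

/* A small static helper is a prime candidate for inlining at -O2. */
static int clamp_len(int len)
{
    return len > 4096 ? 4096 : len;   /* a patch might change 4096 here */
}

/* GCC will typically inline clamp_len() into read_buf(), so read_buf()'s
 * machine code contains a copy of clamp_len()'s logic.  Editing
 * clamp_len() in the source therefore changes the compiled body of
 * read_buf() as well, even though read_buf()'s source is untouched. */
int read_buf(int requested)
{
    return clamp_len(requested);
}

int main(void)
{
    printf("%d\n", read_buf(100000));
    return 0;
}

Diffing the disassembly (objdump -d) of builds before and after editing clamp_len() would show read_buf() changing too, which is the propagation intgr describes below.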
Architectures https://lwn.net/Articles/706876/ nyfle <div class="FormattedComment"> In the absence of a reply, I thought I'd add my 2p worth:<br> <p> Scaleway - <a href="https://www.scaleway.com">https://www.scaleway.com</a><br> </div> Sat, 19 Nov 2016 11:43:42 +0000 Topics in live kernel patching https://lwn.net/Articles/706845/ unixbhaskar <div class="FormattedComment"> Whoa! Lots of things going on. But I believe it should see the light of day sooner rather than later.<br> </div> Fri, 18 Nov 2016 15:49:29 +0000 Topics in live kernel patching https://lwn.net/Articles/706749/ intgr <div class="FormattedComment"> <font class="QuotedText">&gt; larger blocks designed to be replaced as wholes, rather than trying to fiddle around with gcc optimisations</font><br> <p> As the article mentions, the patch build system can safely cope with "most of the optimization issues". If optimisations applied to a function propagate to callees or elsewhere, then those call sites will be picked up by the diff as well.<br> <p> Are there actually any optimisations that break with this approach?<br> <p> </div> Thu, 17 Nov 2016 18:32:43 +0000 Architectures https://lwn.net/Articles/706694/ RCL <div class="FormattedComment"> Where? Serious question, since I'd like to give ARM a try for servers, but the support among cloud providers seems to be non-existent. A few small startups apparently get oversubscribed and just put you in a queue to be informed "when hardware is available".<br> </div> Thu, 17 Nov 2016 14:11:46 +0000 Architectures https://lwn.net/Articles/706542/ k8to <div class="FormattedComment"> Perhaps you didn't notice how Oracle has been fairly effectively killing the popularity of the former Sun's hardware? I mean, maybe in the space you operate it's still clinging on, but in the *many* IT spaces I've touched in that timeframe, Solaris &amp; SPARC have become purely legacy.<br> <p> Meanwhile, ARM is growing. You can get cloud service instances running on ARM these days.<br> </div> Wed, 16 Nov 2016 07:40:59 +0000 Topics in live kernel patching https://lwn.net/Articles/706538/ fandingo <div class="FormattedComment"> Who is interested in this feature? Seriously. It's an awful can of worms, and we all know that it's never going to meet the compatibility or reliability requirements of the people who would be interested.<br> <p> <font class="QuotedText">&gt; changing this mechanism to require that all affected modules be loaded before a live patch is applied.</font><br> <p> *Tries not to laugh hysterically.*<br> <p> <font class="QuotedText">&gt; Splitting the patch module, it turns out, could be problematic for any sort of wide-ranging change. CVE-2016-7097 was mentioned as an example; it included a virtual filesystem layer API change that had to be propagated to all filesystems.
If it were to be split apart, the result would be a long list of modules that would need to be loaded to apply the patch.</font><br> <p> It's almost like the kernel uses subsystems that share core functionality and build complexity on top of each other.<br> <p> <font class="QuotedText">&gt; There was a lively discussion on whether the rules concerning live patches for modules should be changed, much of it focused on a question asked by Steve Rostedt: if a module isn't present in the kernel, why not just fix it on disk rather than lurking in the kernel, waiting to patch it should it ever be loaded? Jiri Kosina replied that replacing on-disk modules would be hard from a distributor's point of view; it would introduce modules that no longer belong to the kernel package.</font><br> <p> I don't understand this issue at all. The distro's package management tool will invoke the live patching tool, no? Whatever package controls that module should be able to handle this without issue. The only remaining issue is locking out module loading during the update process. Afterwards, the updated module should be loadable from disk if ever needed.<br> <p> <font class="QuotedText">&gt; The alternative would be to split the live-patch module into multiple pieces, each of which applies a patch to a single kernel module. Then, only the pieces that are relevant to any given running system need to be loaded.</font><br> <p> So how would one use `modprobe -r` to remove module X after live patching? Do I remove the patch module first and then the module, or vice versa? How does that affect system stability? Presumably systems using live patching have really long uptimes, so what happens if there are a dozen patch modules on top of the original module?<br> <p> <font class="QuotedText">&gt; He did raise a few larger questions, though. One of those is expanding live patching to user-space code as well; there are, evidently, users who are interested in that capability.</font><br> <p> The whole endeavor seems academic without reliable mechanisms for patching the complete software stack. We've already seen the difficulties package managers have in reliably applying system updates (<a href="https://lwn.net/Articles/702629/">https://lwn.net/Articles/702629/</a>).<br> <p> <font class="QuotedText">&gt; He asked: what are the benefits of using live patching rather than performing a live cluster update? If a cluster can be taken down and upgraded one machine at a time, there is no real need for a live-patching infrastructure. We don't all run clusters, but users whose uptime needs make them consider live patching maybe should be using clusters.</font><br> <p> He's pointing to a deeper problem of attesting live patches and, more fundamentally, the trustworthiness of a mutable kernel. Sure, loadable kernel modules inherently undermine the attestation of a system, but live patching substantially complicates security.<br> <p> <font class="QuotedText">&gt; We don't all run clusters, but users whose uptime needs make them consider live patching maybe should be using clusters.</font><br> <p> Bingo.<br> <p> "Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should."<br> </div> Wed, 16 Nov 2016 03:37:42 +0000
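For reference on the mechanics being debated above, a condensed sketch of a live-patch module built on the kernel's livepatch API, modelled loosely on samples/livepatch/livepatch-sample.c in the kernel tree; the target function and its signature are invented for illustration, and this uses the later klp_enable_patch()-only interface (the API current at the time of the article also required a separate klp_register_patch() call). Each klp_object names one target, vmlinux or a module, and one patch module may carry many such objects; the proposal under discussion is whether to keep them together or split each into its own module:

/* klp_sketch.c -- condensed live-patch module sketch; the patched
 * function name and signature are hypothetical. */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/livepatch.h>

/* Replacement body; its signature must match the function it replaces. */
static int livepatch_ext4_fix(void)
{
        return 0;
}

static struct klp_func funcs[] = {
        {
                .old_name = "ext4_some_function",   /* invented name */
                .new_func = livepatch_ext4_fix,
        }, { }
};

/* One klp_object per target object.  .name = NULL would mean vmlinux;
 * naming a module means this object is applied when (and only when)
 * that module is loaded, the deferred-patching behaviour questioned
 * above.  A single patch may carry many such objects. */
static struct klp_object objs[] = {
        {
                .name = "ext4",
                .funcs = funcs,
        }, { }
};

static struct klp_patch patch = {
        .mod = THIS_MODULE,
        .objs = objs,
};

static int livepatch_init(void)
{
        return klp_enable_patch(&patch);
}

static void livepatch_exit(void)
{
}

module_init(livepatch_init);
module_exit(livepatch_exit);
MODULE_LICENSE("GPL");
MODULE_INFO(livepatch, "Y");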
Architectures https://lwn.net/Articles/706443/ fratti <div class="FormattedComment"> I am surprised there is a port in the works for arm64, but not SPARC. I'd have guessed users running SPARC were more interested in live patching, considering the sorts of applications SPARC is usually found in.<br> </div> Tue, 15 Nov 2016 15:53:24 +0000 Topics in live kernel patching https://lwn.net/Articles/706435/ mjthayer <div class="FormattedComment"> It would seem to me to make sense to try to modularise the innards of the kernel somewhat into larger blocks designed to be replaced as wholes, rather than trying to fiddle around with gcc optimisations. I am sure it would not be easy either, of course.<br> <p> Regarding replacing modules on disk, why not just have separate directories for override modules from live patches? That has been possible for a long time, possibly even from the beginning of loadable kernel modules.<br> </div> Tue, 15 Nov 2016 09:05:15 +0000
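One concrete shape mjthayer's separate-directories suggestion could take, as a sketch only: the directives come from depmod.d(5), while the file name and the ext4 example are invented. A fragment like this tells depmod to prefer modules placed under an override directory such as /lib/modules/&lt;version&gt;/updates/ over the packaged module tree:

# /etc/depmod.d/99-livepatch.conf -- hypothetical file name
# Consider the "updates" subdirectory before the kernel's own module
# tree, so a module dropped into /lib/modules/<version>/updates/
# shadows the packaged module of the same name.
search updates built-in

# Or pin a single module explicitly, for any kernel version:
override ext4 * updates

With such a configuration in place, a replacement module copied into the updates directory would be picked up by modprobe after the next depmod run, without touching the files owned by the distribution's kernel package, which is roughly the distributor concern Jiri Kosina raises above.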