LWN: Comments on "KSM tries again"
https://lwn.net/Articles/330589/
This is a special feed containing comments posted to the individual LWN article titled "KSM tries again".

Memory deduplication by common disk source
https://lwn.net/Articles/363352/
kabloom
There is another situation where this is useful: interpreted languages. Suppose you run a Rails app on your server. To take advantage of concurrency, you fork the Rails app several times. All of the Rails code (already parsed, and byte-compiled if you're on Ruby 1.9) sits on copy-on-write pages. (Ruby Enterprise Edition is intended to keep the garbage collector from writing to those pages.) Now suppose you run several apps that do not come about by forking. The Rails code (parsed or byte-compiled) can't be traced back to a single disk block (because it has been parsed), but it probably has the same page layout despite being loaded into different interpreters. Here, KSM should help.
Sun, 22 Nov 2009 21:26:38 +0000

We can avoid reading the whole page for each comparison, even in the worst case.
https://lwn.net/Articles/362292/
gmatht
Say that we have a binary tree. Each left edge is labelled 0, and each right edge is labelled 1. Whenever we read a bit from the page, we follow the edge with the label of that bit. For example, if we look for the zero page in the tree, we navigate to the leftmost node.

This is clearly very inefficient, as this simple algorithm needs exactly 4096*8 branches to find any page. However, we read the page exactly once for all comparisons. And we can optimize it; e.g., if we have a chain of nodes with only single children, we can merge them into a single edge.

Another approach would be to make use of the fact that if, say, we know all child pages of the current node start with "Foo", we don't need to compare the first three characters any more. As such, we'd usually only have to access each cache line once.
Tue, 17 Nov 2009 16:03:45 +0000

See? Patents do help foster innovation!
https://lwn.net/Articles/333450/
nevyn
Speed improvement: don't bother hashing it, just use the first 8 bytes of the page as the first-pass value. I'd assume that couldn't be patented (given how it's a trivial optimization of what the memcmp() is doing) ... but who knows.
Fri, 15 May 2009 19:43:42 +0000

vmware memory sharing numbers
https://lwn.net/Articles/333197/
robbe
For reference: on my VMware ESX 3.5 installation, Linux VMs have between 5 and 25% of their allotted memory marked as shared. The factor is higher for idle machines than for busy ones, as expected.

I've seen 35% for Windows VMs, as these tend to zero out unused pages more aggressively. Those are of course trivially shareable.
Thu, 14 May 2009 08:47:55 +0000
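As an aside on the zeroed pages mentioned above: an all-zero page can be shared with a single system-wide zero page without any pairwise comparison at all. A minimal userspace sketch of that check follows; the helper name and constants are invented here for illustration, this is not code from KSM or ESX.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096

/* Hypothetical helper: returns true if the page is entirely zero,
 * i.e. trivially shareable with one system-wide zero page.
 * Scans word-sized chunks and stops at the first non-zero word. */
static bool page_is_zero(const void *page)
{
    const uint64_t *p = page;
    for (size_t i = 0; i < PAGE_SIZE / sizeof(*p); i++) {
        if (p[i] != 0)
            return false;   /* early exit on the first non-zero word */
    }
    return true;
}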
See? Patents do help foster innovation!
https://lwn.net/Articles/332525/
nix
What I'd do there is hash the first 32/64/128 bytes or thereabouts of a page (one cache line) and then use that partial hash to eliminate the majority of pages from comparison. (But this obvious solution, which took less than five seconds to think of, may be patented. Bleah.)
Fri, 08 May 2009 14:19:49 +0000

See? Patents do help foster innovation!
https://lwn.net/Articles/332497/
efexis
If you just want to compare two pages then yes, that may be true. It starts to break down, though, when you want to compare a page with, say, ten thousand other pages. Comparing a hash with ten thousand other hashes is going to be quicker than comparing a page with ten thousand other pages (your argument that you only need to scan the first x bytes of a page before a difference is likely to appear holds when comparing hashes too). If that speed increase from comparing hashes outweighs the time spent hashing the pages to begin with, then you are losing speed by not hashing. Of course it's not that simple; optimised sorting/indexing algorithms mean you don't *have* to compare every page with every other page to rule out matches (just as you wouldn't have to compare every pair of hashes). For example, what is the effect on the CPU memory cache of reading from all these pages many times, as opposed to reading smaller hashes?

I think in this case testing and observation are going to be important. It's near impossible to speculate - with the dynamics of potentially millions of memory pages spread across similar to disparate virtual systems - what the comparative results of the two methods will be.
Fri, 08 May 2009 10:50:49 +0000

See? Patents do help foster innovation!
https://lwn.net/Articles/332485/
muwlgr
To find identical pages by comparing their hashes, you first have to calculate those hashes, reading the full contents of each page into CPU registers (and maybe even evicting all the useful CPU cache). Your memory traffic is 100% of your used pages, every time.

By contrast, when you compare pages by their contents right from the start, in most cases you can find that two pages differ quite quickly, by reading only their first few bytes. You have to read two whole pages only to find out that they are identical. And this is done without the mathematical overhead of hash calculation, remember. So your memory traffic is 100% only for those pages that happen to be identical. For pages that differ pairwise, you usually have to read only a small part of them to see that. With random page contents, you stop comparing at the first byte in 255/256 of cases, and within the first 32-bit word in (2^32-1)/2^32 of cases. On Linux there may be differing pages with longer common prefixes, but I think the common prefix will be shorter than half the page length often enough. So for differing memory pages you most probably don't generate 100% of the memory traffic anyway. I can clearly see why the simplified approach of KSM could have its benefits.
Fri, 08 May 2009 09:34:32 +0000
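For reference, the partial-hash pre-filter nix describes in this exchange could look roughly like the following userspace sketch. The function names and the choice of FNV-1a are assumptions made for illustration; this is not the KSM or VMware code.

#include <stdint.h>
#include <string.h>

#define PAGE_SIZE      4096
#define CACHELINE_SIZE 64

/* Cheap FNV-1a hash over just the first cache line of a page.
 * Used only as a pre-filter: pages whose partial hashes differ
 * cannot be identical, so no full comparison is needed for them. */
static uint64_t first_line_hash(const unsigned char *page)
{
    uint64_t h = 1469598103934665603ULL;        /* FNV offset basis */
    for (int i = 0; i < CACHELINE_SIZE; i++) {
        h ^= page[i];
        h *= 1099511628211ULL;                  /* FNV prime */
    }
    return h;
}

/* Only fall back to the full (early-exiting) memcmp when the
 * partial hashes collide. */
static int pages_identical(const unsigned char *a, const unsigned char *b)
{
    if (first_line_hash(a) != first_line_hash(b))
        return 0;
    return memcmp(a, b, PAGE_SIZE) == 0;
}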
See? Patents do help foster innovation!
https://lwn.net/Articles/331404/
dion
Yeah, I knew that; my point was that the patent system forces extra work to happen in order to work around patents, rather than helping people avoid re-inventing things.
Sun, 03 May 2009 15:00:41 +0000

See? Patents do help foster innovation!
https://lwn.net/Articles/331399/
lemmings
> Without software patents the KSM feature would be stuck doing the same stupid thing as vmware is doing :)

No, what VMware is doing is not stupid. The security "concern" is not a concern. One simply does a memcmp on pages which have a matching hash.

The result of the KSM hash avoidance _will_ be a slower, less scalable identical-page scanning mechanism. The stable/unstable lists optimisation is orthogonal to the use of hashes.
Sun, 03 May 2009 14:33:17 +0000

See? Patents do help foster innovation!
https://lwn.net/Articles/331398/
dlang
The grandparent comment was advocating eliminating all patents, not just the ones on software.

I don't think there is anyone who would claim that software patents are currently working correctly.
Sun, 03 May 2009 14:26:01 +0000

See? Patents do help foster innovation!
https://lwn.net/Articles/331396/
dion
Yes, in software tons of things are being invented in places where there are no software patents.

If there were any great advantage to having software patents, then the US and Japan should be leading the world in software innovation.
Sun, 03 May 2009 13:27:28 +0000

See? Patents do help foster innovation!
https://lwn.net/Articles/331393/
dlang
Playing devil's advocate here.

Can you give an example of any country that did not have patent laws producing anywhere near the number of new things that are produced by the US?

Unfortunately there is no such example to counter this with.

All that can be done is to look at history and note the number of things that were done by one person or group, then lost over time, only to be rediscovered by another group many years later. The patent system is supposed to prevent this by getting these people to disclose enough detail that 'a person ordinarily skilled in the field can duplicate the invention'.

In my opinion, the big problem is that the bar for getting patents is just too low. It used to be that you had to provide a working model of the invention, to be inspected to see whether it matched up with the patent. That's not done any more, and many patents are for things that just don't work. A patent is also only supposed to be granted for things that are 'not obvious to a person skilled in the field'; that is not being enforced (especially in the computer field), and this leads to far too many patents.

Then you get into the problem that
Sun, 03 May 2009 12:41:25 +0000
See? Patents do help foster innovation!
https://lwn.net/Articles/331384/
dion
Well, personally I have never seen a single instance of patents having a positive influence on anything, but I've heard plenty of stories from various industries where they were a huge burden, so I'd rather do away with patents completely.

However, "immunity until notified" for published works would be a very fair deal for everyone, especially for the society that allows patents, because publishing is exactly what patents are supposed to encourage.
Sun, 03 May 2009 11:32:57 +0000

See? Patents do help foster innovation!
https://lwn.net/Articles/331372/
dlang
That isn't fair to the 'small inventor' battling the 'large company' (the image near and dear to patent defenders' hearts), because the little guy may not be able to find out that the company is infringing on the patent, due to trade secrets.

That being said, I think a very good case could be made for an explicit exception for open-source software, where any author of open-source software can post a copy of their code to a site (or a list of sites), and companies then have X amount of time to protest the use of any patent in code posted there. Give the author of the open-source code immunity for any use prior to any notice being delivered and you will have authors willing to post there.

I'm sure companies would quickly spring up offering the service of investigating this code for patent violations. (I'm also sure that most of them would not begin to be able to do the job, but that's their, and their clients', problem to deal with ;-)
Sun, 03 May 2009 07:38:27 +0000

Memory deduplication by common disk source
https://lwn.net/Articles/331353/
giraffedata
Your point about KSM working in the here and now is good, but as a question of long-term strategy, ncm's is equally good. Maybe KSM should be thought of as a stop-gap.

In the systems where memory duplication is a big problem, the common disk source of the memory pages is a lot closer than Debian's build server. If you have a hundred virtual machines, you probably did the Debian install once and copied the disk image a hundred times locally.

If that copy is a naive physical copy, then it is still hard for the hypervisor to know that two memory pages have the same ultimate source. But if you apply the same deduplication strategy to the duplicated disk (and there is plenty of work going on to make that a reality), then you have 100 virtual disk devices all sharing the same physical disk blocks, and the hypervisor knows it. So it can easily back 100 virtual memory pages with a single real memory page.
Sun, 03 May 2009 02:47:19 +0000

KSM tries again
https://lwn.net/Articles/331215/
tack
I just want to point out that VMware does a first-pass comparison using hashes and, should the hashes match, a second-pass byte-by-byte comparison in order to rule out hash collisions.

http://www.vmware.com/pdf/esx3_memory.pdf
Fri, 01 May 2009 19:33:41 +0000
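In outline, the two-pass scheme tack describes (hash first, full byte-by-byte check second to rule out collisions) looks roughly like the sketch below. This is a generic illustration under assumed names and a made-up bucket layout, not VMware's or KSM's actual code.

#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096

struct page_entry {
    uint64_t             hash;    /* full-page hash, first-pass key */
    const unsigned char *data;    /* page contents */
    struct page_entry   *next;    /* chain of entries in one bucket */
};

/* First pass: compare hashes. Second pass: memcmp to rule out hash
 * collisions before two pages are actually considered identical. */
static struct page_entry *find_identical(struct page_entry *bucket,
                                         uint64_t hash,
                                         const unsigned char *page)
{
    for (struct page_entry *e = bucket; e != NULL; e = e->next) {
        if (e->hash != hash)
            continue;                          /* cheap first pass */
        if (memcmp(e->data, page, PAGE_SIZE) == 0)
            return e;                          /* verified identical */
    }
    return NULL;
}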
See? Patents do help foster innovation!
https://lwn.net/Articles/331148/
dion
Without software patents, the KSM feature would be stuck doing the same stupid thing as vmware is doing :)

It makes me quite happy to see that there was a way around the patent problem this time, although it would be helpful to be able to disable patent-encumbered features, or enable workarounds, only when the software happens to be running in a non-free country.

Maybe all of the patent nonsense could be done away with simply by making it the responsibility of the patent owner to spot and warn of infringements, and making it impossible to collect any damages until a reasonable period has passed without action.
Fri, 01 May 2009 14:02:41 +0000

KSM tries again
https://lwn.net/Articles/330952/
Darkmere
Not really; you are thinking of this from a single-machine perspective.

Think of it like this:
host machine running KSM:
VM 1:
VM 2:
VM 3:

Across all these machines there are bound to be a lot of similar pages of memory allocated. For most uses where this would be an improvement, you run the same distribution and patch level on all the VMs, with some minor differences. In those cases, things like prelink and memory randomization make the binaries different on disk; however, once you load them into RAM, the working state of, for example, /sbin/init is bound to have a LOT of pages in common between the four systems (host + 3 VMs).

And that is where you can get some neat memory savings.
Thu, 30 Apr 2009 09:45:19 +0000

KSM tries again
https://lwn.net/Articles/330948/
simlo
Doesn't this in some ways replace or complement dynamic linking? It seems to be based on the same idea, but moves the whole job into the kernel.

Of course, if two applications statically link to a library, there could be many reasons why the resulting pages within the executables are not identical :-(
Thu, 30 Apr 2009 09:08:02 +0000

Close but no cigar
https://lwn.net/Articles/330935/
khim
Of course all such pages come from a common block somewhere on Debian's build server. But there is no easy way to track that relationship at all! If you are thinking about shared libraries and such, that is a different kettle of fish, and THESE pages are shared already. KSM is designed to work with KVM and Xen, where identical pages come from different virtual filesystems and thus from different physical places. Maybe in the future btrfs will change this, but KSM works here and now...
Thu, 30 Apr 2009 06:25:31 +0000

KSM tries again
https://lwn.net/Articles/330920/
ncm
I'll bet you could get almost equally good results much more cheaply by tracking pages that map to common disk blocks, i.e., instead of actually scanning the pages, you track where their contents came from. You can do a little better by similarly tracking writable pages that are read() into, marking them read-only until they're touched, if ever.

This gets you a long way toward zero-copy I/O (what the BSDs call UVM), although that's said to play poorly with multiple cores.
Thu, 30 Apr 2009 04:54:38 +0000
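As a footnote to the hash-versus-memcmp debate running through these comments: the KSM patch itself indexes candidate pages in trees ordered by their raw contents, so memcmp() serves both as the search comparator and as the final identity check, and no hashing of page contents is needed. A minimal userspace sketch of that idea follows; the names and the flat binary-tree structure are invented here for illustration, and the kernel's stable/unstable red-black trees are considerably more involved.

#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

/* A node in a content-ordered binary tree: pages are kept sorted by
 * memcmp() of their raw contents, so lookup needs no hashing at all. */
struct tree_node {
    const unsigned char *page;
    struct tree_node    *left, *right;
};

/* Search for a page with identical contents; insert it if absent.
 * memcmp() stops at the first differing byte, so most differing pages
 * are rejected after reading only a small prefix. */
static struct tree_node *tree_search_insert(struct tree_node **root,
                                            const unsigned char *page)
{
    struct tree_node **link = root;

    while (*link) {
        int cmp = memcmp(page, (*link)->page, PAGE_SIZE);

        if (cmp < 0)
            link = &(*link)->left;
        else if (cmp > 0)
            link = &(*link)->right;
        else
            return *link;          /* identical page already indexed */
    }

    struct tree_node *node = calloc(1, sizeof(*node));
    if (node) {
        node->page = page;
        *link = node;              /* new content, remember it */
    }
    return node;
}

Because the only operation ever performed on page contents here is comparison, this style of lookup sidesteps the hash-based approach entirely, which is the point several of the comments above are arguing over.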