LWN: Comments on "Optimizing stable pages"
https://lwn.net/Articles/528031/

This is a special feed containing comments posted to the individual LWN article titled "Optimizing stable pages".

Optimizing stable pages
https://lwn.net/Articles/528729/
dlang (Mon, 10 Dec 2012 18:11:44 +0000)

> CoW using VM tricks is quite often _inferior_ in speed to simple copying.

COW is slower if the copy actually needs to take place, but faster if the copy is never needed.

The question is how likely you are to need to do the copy.

Optimizing stable pages
https://lwn.net/Articles/528722/
Cyberax (Mon, 10 Dec 2012 17:27:50 +0000)

CoW using VM tricks is quite often _inferior_ in speed to simple copying.

Optimizing stable pages
https://lwn.net/Articles/528704/
butlerm (Mon, 10 Dec 2012 16:51:22 +0000)

Given the severe performance issues that the stable pages feature can incur, it seems like the optimal long-term solution would be to add copy-on-write capability for pages that are under writeout.

Meaning that when a thread attempts to modify such a page, a duplicate physical page is created, the page structure and PTE are updated accordingly, and ownership of the physical page under writeout is transferred to the filesystem or device doing the writeout, for reclamation when the writeout completes. That would be far superior, in most cases, to stalling a thread for an arbitrary period in the meantime, something that is the death of anything resembling real-time response.

It would also be markedly superior to a copy-always policy in the filesystem or storage layer concerned. The best of both worlds, essentially.
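A rough sketch of what butlerm describes, for concreteness. This is illustrative C in the shape of the kernel's write-fault handling, not actual kernel code: the function is hypothetical, and a real implementation would also need page locking, TLB flushing, and page-cache bookkeeping for the ownership transfer.

    /*
     * Hypothetical sketch: on a write fault to a page under writeback,
     * give the faulting task a fresh copy instead of blocking in
     * wait_on_page_writeback(), which is what stable pages do today.
     */
    static int cow_page_under_writeout(struct vm_area_struct *vma,
                                       unsigned long addr, pte_t *pte,
                                       struct page *old_page)
    {
        struct page *new_page;

        if (!PageWriteback(old_page))
            return 0;           /* no writeout in progress; write in place */

        new_page = alloc_page(GFP_HIGHUSER_MOVABLE);
        if (!new_page)
            return -ENOMEM;

        /* Duplicate the data and point the faulting task's PTE at the copy. */
        copy_highpage(new_page, old_page);
        set_pte_at(vma->vm_mm, addr, pte,
                   mk_pte(new_page, vma->vm_page_prot));

        /*
         * old_page stays with the writeout path and would be reclaimed
         * when the I/O completes; this ownership transfer is the part
         * the kernel does not do today.
         */
        return 1;
    }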
EC2 (local) instance storage
https://lwn.net/Articles/528664/
dlang (Mon, 10 Dec 2012 06:34:42 +0000)

EBS storage is not simple disks; the size flexibility and performance you can get could not be supported by providing raw access to drives or drive arrays.

As you say, instance-local storage is different.

EC2 (local) instance storage
https://lwn.net/Articles/528663/
bjencks (Mon, 10 Dec 2012 06:18:51 +0000)

Just to be clear, there are two different ways of initializing storage: root filesystems are created from a full disk image that specifies every block, so there are no uninitialized blocks to worry about, while non-root instance storage and fresh EBS volumes are created in a blank state, returning zeros for every block.

It's well documented that fresh EBS volumes keep track of touched blocks; to get full performance on random writes you need to touch every block first. That implies to me that they don't even allocate the block on the back end until it's written to.

Not sure how instance storage initialization works, though.

EC2 (local) instance storage
https://lwn.net/Articles/528658/
Cyberax (Mon, 10 Dec 2012 03:30:40 +0000)

#1 is unlikely because local storage is quite large (4 TB on some nodes). It's not hard to keep track of dirtied blocks; they need that to support snapshots on EBS volumes anyway.

EC2 (local) instance storage
https://lwn.net/Articles/528657/
dlang (Mon, 10 Dec 2012 03:27:41 +0000)

That eliminates #2, but it could be #1 or #3.

It seems like keeping a map of which blocks have been written to would be rather expensive to do at the hypervisor level, particularly if you are talking about large drives.

Good to know that you should get zeros for uninitialized sectors.

EC2 (local) instance storage
https://lwn.net/Articles/528656/
Cyberax (Mon, 10 Dec 2012 02:55:11 +0000)

They are using #3. Raw device reads on uninitialized areas return zeroes.

EC2 (local) instance storage
https://lwn.net/Articles/528652/
dlang (Mon, 10 Dec 2012 01:07:32 +0000)

Amazon doesn't put a filesystem on the device, you do.

> I don't know how Amazon (or the hypervisor) prevents access to the raw disk, where unallocated sectors might be found and scavenged even if the filesystem is erased. I guess they do something clever or we would have heard about people reading Zynga's customer database from a stale instance.

This is exactly what I'm talking about.

There are basically three approaches to doing this without the cooperation of the OS running on the instance (which you don't have):

1. The hypervisor zeros out the entire drive before the hardware is considered available again.

2. The hypervisor encrypts the blocks with a random key for each instance; lose the key, and reading the blocks just returns garbage.

3. The hypervisor tracks which blocks have been written to and only returns valid data for those blocks.

I would guess #1 or #2, though after thinking about it for a while I would not bet either way.

#1 is simple, but it takes a while (unless the drive has direct support for trim and effectively implements #3 in the drive; SSDs may do this).

#2 is more expensive, but it allows the system to be reused faster.
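To illustrate approach #3, which matches both Cyberax's observation and the touched-block behavior bjencks describes for EBS, here is a toy model in plain C. It is not any actual hypervisor's code; a one-bit-per-block map gates reads, so stale data never leaks and first writes pay a small bookkeeping cost:

    #include <stdint.h>
    #include <string.h>

    #define BLOCK_SIZE 4096

    struct tracked_disk {
        uint8_t *backing;   /* raw device contents */
        uint8_t *written;   /* bitmap: one bit per block */
    };

    static int block_written(const struct tracked_disk *d, uint64_t blk)
    {
        return d->written[blk / 8] & (1 << (blk % 8));
    }

    void disk_write(struct tracked_disk *d, uint64_t blk, const void *buf)
    {
        memcpy(d->backing + blk * BLOCK_SIZE, buf, BLOCK_SIZE);
        d->written[blk / 8] |= 1 << (blk % 8);   /* mark block as dirtied */
    }

    void disk_read(const struct tracked_disk *d, uint64_t blk, void *buf)
    {
        if (block_written(d, blk))
            memcpy(buf, d->backing + blk * BLOCK_SIZE, BLOCK_SIZE);
        else
            memset(buf, 0, BLOCK_SIZE);   /* untouched blocks read as zeros */
    }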
EC2 (local) instance storage
https://lwn.net/Articles/528605/
Cyberax (Sun, 09 Dec 2012 16:01:18 +0000)

Uhm. Nope.

Amazon doesn't care about your filesystem. AMIs are just dumps of block devices; Amazon simply unpacks them onto a suitable disk. You're free to use any filesystem you want (there might be problems with the bootloader, but they are not insurmountable).

You certainly can access the underlying disk device.

EC2 (local) instance storage
https://lwn.net/Articles/528598/
man_ls (Sun, 09 Dec 2012 12:04:14 +0000)

When Amazon EC2 creates a new instance, it allocates new instance storage with its own filesystem. This process includes formatting the filesystem, and sometimes copying files from the AMI (image file) to the new filesystem. So any previous filesystems are erased. It is here that zeroing unallocated blocks from the previous filesystem comes into play, which is what FALLOC_FL_NO_HIDE_STALE (http://lwn.net/Articles/528107/) would mess up.

I don't know how Amazon (or the hypervisor) prevents access to the raw disk, where unallocated sectors might be found and scavenged even if the filesystem is erased. I guess they do something clever, or we would have heard about people reading Zynga's customer database from a stale instance.

EC2 (local) instance storage
https://lwn.net/Articles/528587/
dlang (Sun, 09 Dec 2012 01:50:02 +0000)

Thanks for the correction about the ability to reboot an instance.

I don't think this is what FALLOC_FL_NO_HIDE_STALE is about. FALLOC_FL_NO_HIDE_STALE is about not zeroing something that this filesystem has not allocated before; if you have a disk that has a valid ext4 filesystem on it and plug that disk into another computer, you can just read the filesystem.

When you delete a file, the data remains on the disk, and root can go access the raw device and read the data that used to be in the file.

By default, when a filesystem allocates a block to a new file, it zeros out the data on that block; it's this step that FALLOC_FL_NO_HIDE_STALE lets you skip.

If you really had raw access to the local instance storage without the hypervisor doing something, then you could just mount whatever filesystem the person before you left there. To avoid this, Amazon would need to wipe the disks, and since it takes a long time to write a TB or so of data (even on SSDs), I'm guessing that they do something much easier, like some sort of encryption, so that one instance can't see data written by a prior instance.
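For reference, this is roughly what the proposed flag would look like from userspace. FALLOC_FL_NO_HIDE_STALE was never merged into mainline, so the constant is defined by hand below (its value here is illustrative, taken from the patch discussion), and on a stock kernel the call simply fails with EOPNOTSUPP:

    #define _GNU_SOURCE
    #include <fcntl.h>

    #ifndef FALLOC_FL_NO_HIDE_STALE
    #define FALLOC_FL_NO_HIDE_STALE 0x04   /* proposed flag, not upstream */
    #endif

    /* Allocate len bytes without the usual zeroing of the new blocks,
       exposing whatever stale data those blocks happened to contain. */
    int prealloc_unzeroed(int fd, off_t len)
    {
        return fallocate(fd, FALLOC_FL_NO_HIDE_STALE, 0, len);
    }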
Optimizing stable pages
https://lwn.net/Articles/528585/
Cyberax (Sun, 09 Dec 2012 01:40:09 +0000)

We use instance storage on the new SSD-based nodes for very fast replicated PostgreSQL nodes. It indeed survives reboots and oopses.

It does not survive stopping the instance through the Amazon EC2 API.

EC2 (local) instance storage
https://lwn.net/Articles/528583/
man_ls (Sun, 09 Dec 2012 01:18:06 +0000)

New instances should not see the contents of disk sectors they did not initialize themselves. That is the point of the recent discussion about FALLOC_FL_NO_HIDE_STALE (http://lwn.net/Articles/528107/). The kernel will not allow one virtual machine to see the contents of another's disk, or at least that is what I understand.

The AWS console has an option to reboot a machine, between "Terminate" and "Stop". You can also do it programmatically using EC2 commands, e.g. if the machine stops responding.

EC2 (local) instance storage
https://lwn.net/Articles/528581/
dlang (Sun, 09 Dec 2012 01:11:26 +0000)

If you don't do any sort of encryption, then when a new instance mounts the drives, it would be able to see whatever was written to the drive by the last instance that used it.

I would absolutely run / without a journal if / is on media that I won't be able to access after a shutdown (a ramdisk, for example).

I don't remember seeing anything in the AWS management console that would let you reboot an instance; are you talking about rebooting it from inside the instance? If you can do that, you don't need a journal, because you can still do a clean shutdown. I don't consider the system to have crashed; I count a crash as being when the system stops without being able to do any cleanup (kernel hang or power-off, on traditional hardware).

EC2 (local) instance storage
https://lwn.net/Articles/528579/
man_ls (Sat, 08 Dec 2012 23:56:46 +0000)

I am not sure what "dies" means in this context. If the instance is stopped or terminated, then the instance storage is lost. If the instance is rebooted, then the same instance storage is kept. Usually you reboot machines which "die" (i.e. crash or oops), so you don't lose instance storage.

In short: any new EC2 instance will of course get new instance storage, but the same instance will get the same instance storage.

I understand your last paragraph even less. Why do transparent encryption? Just use regular filesystem options (i.e. don't use FALLOC_FL_NO_HIDE_STALE, http://lwn.net/Articles/528107/) and you are good. I don't get what a journal has to do with it.

Again, keep in mind that many instance types keep their root filesystem on local instance storage. Would you run / without a journal? I would not.

Optimizing stable pages
https://lwn.net/Articles/528577/
dlang (Sat, 08 Dec 2012 23:28:19 +0000)

According to the instructor in the class I've been in for the last three days, when an EC2 instance dies, nothing you ever do will give you access to the data that you stored on the ephemeral drive. This is not EBS storage; this is the instance storage.

Ignoring what they say and just looking at it from a practical point of view:

The odds of any new EC2 instance you fire up starting on the same hardware, and therefore having access to the data, are virtually nonexistent.

If you can't get access to the drive again, journaling is not going to be any good at all.

Add to this the fact that they probably have the hypervisor either do some form of transparent encryption, or return all zeros if you read a block you haven't written to yet (to prevent you from seeing someone else's data), and you have no reason to even try to use a journal on these drives.

Optimizing stable pages
https://lwn.net/Articles/528569/
man_ls (Sat, 08 Dec 2012 22:39:17 +0000)

> one prime example would be the temporary storage on Amazon Cloud machines. If the system crashes, all the data disappears

That is a common misconception, but it is not true. As this Amazon doc (http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/InstanceStorage.html) explains, data in the local instance storage is not lost on a reboot. Quoting that page:

> However, data on instance store volumes is lost under the following circumstances:
> - Failure of an underlying drive
> - Stopping an Amazon EBS-backed instance
> - Terminating an instance

So it is not guaranteed, but it is not ephemeral either: many instance types actually have their root on an instance store. Amazon teaches you to treat it as ephemeral so that users do not rely on it too much. But using ext2 on it is not a good idea unless it is truly ephemeral.
Optimizing stable pages
https://lwn.net/Articles/528378/
cesarb (Fri, 07 Dec 2012 10:22:09 +0000)

But AFAIK, you can use an ext2 or ext3 filesystem with the ext4 filesystem driver, and it will work fine.

IIRC, the default Fedora kernel was configured to always use the ext4 code, even when mounting ext2/ext3 filesystems.

Optimizing stable pages
https://lwn.net/Articles/528329/
andresfreund (Thu, 06 Dec 2012 23:06:02 +0000)

ISTM that the other improvements, like extents, hashed directory lookups, delayed allocation and so on, might already offset the journal overhead by a good bit.

Also (I haven't tried this, though), shouldn't you be able to create an ext4 without a journal while keeping the other ext4 benefits? According to man tune2fs you can even remove the journal with -O^has_journal from an existing FS. The same is probably true for mkfs.ext4.

Optimizing stable pages
https://lwn.net/Articles/528321/
Cyberax (Thu, 06 Dec 2012 22:38:39 +0000)

ext4 can be used without journaling (you need to use tune2fs to set it up). Google added this feature for these use cases specifically.

We've benchmarked it on Amazon EC2 machines: ext4 without journaling is faster than ext2. There are really no more use cases for ext2/3.
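For the record, the commands being discussed look like this (the device name is a placeholder; removing a journal requires the filesystem to be unmounted, and running e2fsck afterwards is a common precaution):

    # create a new ext4 filesystem with no journal
    mkfs.ext4 -O ^has_journal /dev/sdX

    # or remove the journal from an existing, unmounted filesystem
    tune2fs -O ^has_journal /dev/sdX
    e2fsck -f /dev/sdX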
Optimizing stable pages
https://lwn.net/Articles/528282/
dlang (Thu, 06 Dec 2012 21:18:09 +0000)

> If you're truly not worried about data integrity, why not just add all that disk space as swap and use tmpfs?

Swap has horrible data locality; depending on how things get swapped out, a single file could end up scattered all over the disk.

In addition, your approach puts the file storage in direct competition with all processes for memory; you may end up swapping out program data because your file storage 'seems' more important.

Disk caching has a similar pressure, but the kernel knows that cache data is cache, and that it can therefore be thrown away if needed. tmpfs data isn't in that category.

Optimizing stable pages
https://lwn.net/Articles/528278/
bjencks (Thu, 06 Dec 2012 20:58:21 +0000)

If you're truly not worried about data integrity, why not just add all that disk space as swap and use tmpfs? (I haven't tried this; it could be that it actually works terribly, but it seems like it *should* be the optimal solution.)

Optimizing stable pages
https://lwn.net/Articles/528264/
dlang (Thu, 06 Dec 2012 19:56:04 +0000)

There are still times when the best filesystem to use is ext2.

One prime example would be the temporary storage on Amazon Cloud machines. If the system crashes, all the data disappears, so there's no value in having a journaling filesystem, and in many cases ext3 and ext4 can have significant overhead compared to ext2.

Optimizing stable pages
https://lwn.net/Articles/528203/
Jonno (Thu, 06 Dec 2012 14:22:02 +0000)

> Or by removing fs/ext3/.

Honestly, removing fs/ext2, fs/ext3 and fs/jbd is probably the only sane thing to do in the long run, as fs/ext4 and fs/jbd2 support everything they do and are better tested (at least on recent kernels; conservatives still using ext3 tend not to run -rc kernels).

When the "long run" comes is of course up for debate, but I would say "immediately after Greg's next -longterm announcement", giving conservative users a minimum of two years to prepare, while letting the rest of us go on without the baggage.

Optimizing stable pages
https://lwn.net/Articles/528171/
djwong (Thu, 06 Dec 2012 08:03:49 +0000)

At the moment, I'm trying to figure out if there's a sane way to fix jbd, either by backporting what jbd2 does to flush out dirty data prior to committing a transaction, or by finding a way to have jbd set PG_writeback before calling submit_bh() on the file data.

Or by removing fs/ext3/. I suspect that would not be popular, however. ;)
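The shape of that second idea, as a conceptual sketch only; this is not the actual fs/jbd code, which would also need the per-page buffer accounting and an end_page_writeback() call in the I/O completion path:

    /* Mark the page as under writeback before the buffer is handed to
       the block layer, so stable-page writers wait on it rather than
       modifying it mid-I/O. */
    static void jbd_submit_data_buffer(struct buffer_head *bh)
    {
        struct page *page = bh->b_page;

        set_page_writeback(page);   /* writers now block in
                                       wait_on_page_writeback() */
        submit_bh(WRITE, bh);
        /* end_page_writeback(page) would run from the completion
           handler once the write finishes */
    }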