
Optimizing stable pages

Posted Dec 6, 2012 14:22 UTC (Thu) by Jonno (subscriber, #49613)
In reply to: Optimizing stable pages by djwong
Parent article: Optimizing stable pages

> Or by removing fs/ext3/.

Honestly, removing fs/ext2, fs/ext3 and fs/jbd is probably the only sane thing to do in the long run, as fs/ext4 and fs/jbd2 support everything they do and are better tested (at least on recent kernels; conservatives still using ext3 tend not to run -rc kernels).

When the "long run" comes is of course up for debate, but I would say "immediately after Greg's next -longterm announcement", giving conservative users a minimum of two years to prepare, while letting the rest of us go on without the baggage.



Optimizing stable pages

Posted Dec 6, 2012 19:56 UTC (Thu) by dlang (guest, #313) [Link] (20 responses)

There are still times when the best filesystem to use is ext2.

One prime example would be the temporary storage on Amazon cloud machines. If the system crashes, all the data disappears, so there's no value in having a journaling filesystem, and in many cases ext3 and ext4 can have significant overhead compared to ext2.

Optimizing stable pages

Posted Dec 6, 2012 20:58 UTC (Thu) by bjencks (subscriber, #80303) [Link] (1 responses)

If you're truly not worried about data integrity, why not just add all that disk space as swap and use tmpfs? (I haven't tried this; it could be that it actually works terribly, but it seems like it *should* be the optimal solution)
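Something like the following, perhaps (untested; the device name /dev/xvdb and the tmpfs size are hypothetical):

    # Turn the instance-store device into swap, then back a tmpfs with it
    mkswap /dev/xvdb
    swapon /dev/xvdb
    mkdir -p /mnt/scratch
    # size can exceed physical RAM, since tmpfs pages can spill to swap
    mount -t tmpfs -o size=300g tmpfs /mnt/scratch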

Optimizing stable pages

Posted Dec 6, 2012 21:18 UTC (Thu) by dlang (guest, #313) [Link]

> If you're truly not worried about data integrity, why not just add all that disk space as swap and use tmpfs?

Swap has horrible data locality; depending on how things get swapped out, a single file could end up scattered all over the disk.

In addition, your approach puts the file storage in direct competition with every process for memory; you may end up swapping out program data because your file storage 'seems' more important.

Disk caching creates similar pressure, but the kernel knows that cache data is cache, and that it can therefore be thrown away if needed. tmpfs data isn't in that category.

Optimizing stable pages

Posted Dec 6, 2012 22:38 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

ext4 can be used without journaling (you need to use tune2fs to set it up). Google added this feature specifically for these use cases.

We've benchmarked it on Amazon EC2 machines: ext4 without journaling is faster than ext2. There are really no more use cases for ext2/3.
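A minimal sketch of that setup (the device name /dev/xvdb is hypothetical):

    # Create an ext4 filesystem with the journal disabled from the start
    mkfs.ext4 -O ^has_journal /dev/xvdb
    mount /dev/xvdb /mnt/ephemeral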

Optimizing stable pages

Posted Dec 6, 2012 23:06 UTC (Thu) by andresfreund (subscriber, #69562) [Link]

ISTM that the other improvements, like extents, hashed directory lookups, and delayed allocation, might already offset the journal overhead by a good bit.

Also (I haven't tried this, though): shouldn't you be able to create an ext4 filesystem without a journal while keeping the other ext4 benefits? According to man tune2fs you can even remove the journal from an existing FS with -O ^has_journal. The same is probably true for mkfs.ext4.
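Presumably something along these lines (untested; device name hypothetical):

    # Remove the journal from an existing, unmounted ext4 filesystem
    umount /dev/xvdb
    e2fsck -f /dev/xvdb
    tune2fs -O ^has_journal /dev/xvdb
    # Verify: 'has_journal' should no longer appear in the feature list
    dumpe2fs -h /dev/xvdb | grep -i features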

Optimizing stable pages

Posted Dec 7, 2012 10:22 UTC (Fri) by cesarb (subscriber, #6266) [Link]

But AFAIK, you can use an ext2 or ext3 filesystem with the ext4 filesystem driver, and it will work fine.

IIRC, the default Fedora kernel was configured to always use the ext4 code, even when mounting ext2/ext3 filesystems.
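Even without that kernel configuration (CONFIG_EXT4_USE_FOR_EXT23, IIRC), you should be able to ask for the ext4 driver explicitly at mount time; a hypothetical example:

    # Mount an existing ext3 filesystem using the ext4 code
    mount -t ext4 /dev/sda1 /mnt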

Optimizing stable pages

Posted Dec 8, 2012 22:39 UTC (Sat) by man_ls (guest, #15091) [Link] (14 responses)

> One prime example would be the temporary storage on Amazon cloud machines. If the system crashes, all the data disappears
That is a common misconception, but it is not true. As this Amazon doc explains, data in the local instance storage is not lost on a reboot. Quoting that page:
However, data on instance store volumes is lost under the following circumstances:
  • Failure of an underlying drive
  • Stopping an Amazon EBS-backed instance
  • Terminating an instance
So it is not guaranteed but it is not ephemeral either: many instance types actually have their root on an instance store. Amazon teaches you to treat it as ephemeral so that users do not rely on it too much. But using ext2 on it is not a good idea unless it is truly ephemeral.

Optimizing stable pages

Posted Dec 8, 2012 23:28 UTC (Sat) by dlang (guest, #313) [Link] (13 responses)

According to the instructor in the class I've been in for the last three days, when an EC2 instance dies, nothing you ever do will give you access to the data that you stored on the ephemeral drive. This is not EBS storage; this is the instance storage.

Ignoring what they say and just looking at it from a practical point of view:

The odds of any new EC2 instance you fire up starting on the same hardware, and therefore having access to the data, are virtually nonexistent.

If you can't get access to the drive again, journaling is not going to be any good at all.

Add to this the fact that they probably have the hypervisor either do some form of transparent encryption, or return all zeros when you read a block you haven't written to yet (to prevent you from seeing someone else's data), and you have no reason to even try to use a journal on these drives.

EC2 (local) instance storage

Posted Dec 8, 2012 23:56 UTC (Sat) by man_ls (guest, #15091) [Link] (11 responses)

I am not sure what "dies" means in this context. If the instance is stopped or terminated, then the instance storage is lost. If the instance is rebooted then the same instance storage is kept. Usually you reboot machines which "die" (i.e. crash or oops), so you don't lose instance storage.

In short: any new EC2 instance will of course get a new instance storage, but the same instance will get the same instance storage.

I understand your last paragraph even less. Why do transparent encryption? Just use regular filesystem options (i.e. don't use FALLOC_FL_NO_HIDE_STALE) and you are good. I don't get what a journal has to do with it.

Again, keep in mind that many instance types keep their root filesystem on local instance storage. Would you run / without a journal? I would not.

EC2 (local) instance storage

Posted Dec 9, 2012 1:11 UTC (Sun) by dlang (guest, #313) [Link] (10 responses)

If you don't do any sort of encryption, then when a new instance mounts the drives, it would be able to see whatever was written to the drive by the last instance that used it.

I would absolutely run / without a journal if / is on media that I won't be able to access after a shutdown (a ramdisk for example)

I don't remember seeing anything in the AWS management console that would let you reboot an instance. Are you talking about rebooting it from inside the instance? If you can do that, you don't need a journal, because you can still do a clean shutdown; I don't consider the system to have crashed. I count a crash as the system stopping without being able to do any cleanup (a kernel hang, or power off on traditional hardware).

EC2 (local) instance storage

Posted Dec 9, 2012 1:18 UTC (Sun) by man_ls (guest, #15091) [Link] (9 responses)

New instances should not see the contents of uninitialized (by them) disk sectors. That is the point of the recent discussion about FALLOC_FL_NO_HIDE_STALE. The kernel will not allow one virtual machine to see the contents of another's disk, or at least that is what I understand.

The AWS console has an option to reboot a machine, between "Terminate" and "Stop". You can also do it programmatically using EC2 commands, e.g. if the machine stops responding.
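For instance, with the EC2 API tools this would be something like (instance ID hypothetical):

    ec2-reboot-instances i-1234abcd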

EC2 (local) instance storage

Posted Dec 9, 2012 1:50 UTC (Sun) by dlang (guest, #313) [Link] (8 responses)

Thanks for the correction about the ability to reboot an instance.

I don't think this is what FALLOC_FL_NO_HIDE_STALE is about. FALLOC_FL_NO_HIDE_STALE is about not zeroing out blocks that the filesystem has newly allocated; if you have a disk with a valid ext4 filesystem on it and plug that disk into another computer, you can just read the filesystem.

When you delete a file, the data remains on the disk and root can go access the raw device and read the data that used to be in a file.

By default, when a filesystem allocates a block to a new file, it zeros out the data in that block; it's this step that FALLOC_FL_NO_HIDE_STALE lets you skip.

If you really had raw access to the local instance storage without the hypervisor doing something, then you could just mount whatever filesystem the person before you left there. To avoid this, Amazon would need to wipe the disks, and since it takes a long time to write a TB or so of data (even on SSDs), I'm guessing that they do something much easier, like some sort of encryption that makes it so one instance can't see data written by a prior instance.

EC2 (local) instance storage

Posted Dec 9, 2012 12:04 UTC (Sun) by man_ls (guest, #15091) [Link] (7 responses)

When Amazon EC2 creates a new instance, it allocates new instance storage with its own filesystem. This process includes formatting the filesystem, and sometimes copying files from the AMI (image file) to the new filesystem, so any previous filesystem is erased. It is here that zeroing unallocated blocks from the previous filesystem comes into play, which is what FALLOC_FL_NO_HIDE_STALE would mess up.

I don't know how Amazon (or the hypervisor) prevents access to the raw disk, where unallocated sectors might be found and scavenged even if the filesystem is erased. I guess they do something clever or we would have heard about people reading Zynga's customer database from a stale instance.

EC2 (local) instance storage

Posted Dec 9, 2012 16:01 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link]

Uhm. Nope.

Amazon doesn't care about your filesystem. AMIs are just dumps of block devices - Amazon simply unpacks them onto a suitable disk. You're free to use any filesystem you want (there might be problems with the bootloader, but they are not insurmountable).

You certainly can access the underlying disk device.

EC2 (local) instance storage

Posted Dec 10, 2012 1:07 UTC (Mon) by dlang (guest, #313) [Link] (5 responses)

Amazon doesn't put a filesystem on the device, you do.

> I don't know how Amazon (or the hypervisor) prevents access to the raw disk, where unallocated sectors might be found and scavenged even if the filesystem is erased. I guess they do something clever or we would have heard about people reading Zynga's customer database from a stale instance.

This is exactly what I'm talking about.

There are basically three approaches to doing this without the cooperation of the OS running on the instance (which you don't have):

1. the hypervisor zeros out the entire drive before the hardware is considered available again.

2. the hypervisor encrypts blocks with a random key for each instance; lose the key and reading the blocks just returns garbage

3. the hypervisor tracks what blocks have been written to and only returns valid data for those blocks.

I would guess #1 or #2, and after thinking about it for a while I would not bet either way.

#1 is simple, but it takes a while (unless the drive has direct support for trim and effectively implements #3 in the drive; SSDs may do this)

#2 is more expensive, but it allows the system to be re-used faster

EC2 (local) instance storage

Posted Dec 10, 2012 2:55 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link] (4 responses)

They are using #3. Raw device reads on uninitialized areas return zeroes.

EC2 (local) instance storage

Posted Dec 10, 2012 3:27 UTC (Mon) by dlang (guest, #313) [Link] (3 responses)

That eliminates #2, but it could be #1 or #3.

It seems like keeping a map of which blocks have been written to would be rather expensive to do at the hypervisor level, particularly for large drives.

Good to know that you should get zeros for uninitialized sectors.

EC2 (local) instance storage

Posted Dec 10, 2012 3:30 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

#1 is unlikely because local storage is quite large (4TB on some nodes). It's not hard to keep track of dirtied blocks; they need that to support snapshots on EBS volumes anyway.

EC2 (local) instance storage

Posted Dec 10, 2012 6:18 UTC (Mon) by bjencks (subscriber, #80303) [Link] (1 responses)

Just to be clear, there are two different ways of initializing storage: root filesystems are created from a full disk image that specifies every block, so there are no uninitialized blocks to worry about, while non-root instance storage and fresh EBS volumes are created in a blank state, returning zeros for every block.

It's well documented that fresh EBS volumes keep track of touched blocks; to get full performance on random writes you need to touch every block first. That implies to me that they don't even allocate the block on the back end until it's written to.
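The pre-warming recipe from Amazon's documentation at the time was essentially a full pass over the device; roughly (device name hypothetical, and note that the write variant destroys any existing data):

    # New, empty EBS volume: touch every block with a full write pass
    dd if=/dev/zero of=/dev/xvdf bs=1M
    # Volume restored from a snapshot: a full read pass is enough
    dd if=/dev/xvdf of=/dev/null bs=1M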

Not sure how instance storage initialization works, though.

EC2 (local) instance storage

Posted Dec 10, 2012 6:34 UTC (Mon) by dlang (guest, #313) [Link]

EBS storage is not simple disks; the size flexibility and performance you can get cannot be supported by providing raw access to drives or drive arrays.

As you say, instance local storage is different.

Optimizing stable pages

Posted Dec 9, 2012 1:40 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link]

We use instance storage on the new SSD-based nodes for very fast PostgreSQL replicated nodes. It indeed survives reboots and oopses.

It does not survive stopping the instance through the Amazon EC2 API.

