

An f2fs teardown

Posted Oct 12, 2012 8:21 UTC (Fri) by cmccabe (guest, #60281)
Parent article: An f2fs teardown

Great article!

I'm a little disturbed by the many arbitrary low limits in the filesystem. 16 TB max? Less than 4 TB max for a file? Timestamps only up to 2038?

I mean, sure, good design requires tradeoffs. But I thought the point of this filesystem was that it would become some kind of long-lived standard for how we accessed embedded flash devices, sort of like how FAT32 is now. We would probably not even be talking about replacing FAT32 on flash devices, despite its many inefficiencies and limitations, if it didn't have the 2TB limit.

Or am I misreading this, and it's simply about avoiding the FAT tax and getting some additional performance in the bargain?



An f2fs teardown

Posted Oct 12, 2012 16:06 UTC (Fri) by Aissen (subscriber, #59976) [Link] (4 responses)

I'm not sure the primary use is to avoid the "FAT tax". Sure, it could be useful on SD cards to replace exFAT. But I think the primary goal is to replace ext4 for eMMCs embedded in smartphones (and tablets, or any other smart device). This limitation could then make (a little bit of) sense. With current technology we have 64GB eMMCs, with 128GB in the pipeline. With capacity doubling every 2 years, it would take ~15 years to reach the filesystem limit. Let's hope that by then non-volatile memory use will be pervasive.
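For what it's worth, the arithmetic checks out. A rough sketch (it assumes capacity simply doubles every two years from a 128GB starting point, and ignores the separate ~4TB per-file limit):

    # Years until eMMC capacity reaches the 16TB f2fs volume limit,
    # assuming a 128GB starting point and a doubling every two years.
    capacity_gb = 128
    years = 0
    while capacity_gb < 16 * 1024:   # 16TB expressed in GB
        capacity_gb *= 2             # one doubling...
        years += 2                   # ...every two years
    print(years)                     # 14, i.e. roughly 15 years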

The thing I don't understand is why work isn't being done to make btrfs fit this use case. It already has less write amplification than ext4 or xfs due to its COW nature (I think Arnd Bergmann did some research on that). That would leverage btrfs's years of development experience and higher performance (versus a newly developed filesystem). It would also fit the Linux philosophy of running on anything from the tiniest devices to TOP500 computers.

Is it because btrfs has a high CPU overhead? Because it consumes lots of disk space? Or just because every btrfs developer is working on "big data" server-side use cases?

An f2fs teardown

Posted Oct 14, 2012 18:38 UTC (Sun) by cmccabe (guest, #60281) [Link] (3 responses)

> The thing I don't understand is why work isn't
> being done to make btrfs fit this use case. It
> already has less write amplification than ext4
> or xfs due to its COW nature (I think Arnd
> Bergmann did some research on that).

It's not obvious that btrfs is the best choice for SSDs. Ted Ts'o posted some information on this earlier: http://lwn.net/Articles/470553/

There is currently some work going into btrfs to make it a better match for SSDs; that would probably make an interesting LWN article of its own. Also keep in mind that the kind of SSD you see in a desktop is quite different from what you find in a mobile phone: the desktop drive's firmware is much fancier, so an optimization for one may be a pessimization for the other.

An f2fs teardown

Posted Oct 15, 2012 8:18 UTC (Mon) by Aissen (subscriber, #59976) [Link]

In the link you point to (very interesting, BTW), Ted says that btrfs will be at a disadvantage in "fsync()-happy workloads". So it varies between workloads.

I didn't use the word "SSD", and that's because (as you said) it might refer to different things. I talked about eMMCs and SD cards, which are the target use cases of f2fs and are used in mobile phones.
In some use cases, btrfs might be the best choice, according to Arnd's year-old research:
http://www.youtube.com/watch?feature=player_detailpage&... (I wasn't able to find the updated slides).

An f2fs teardown

Posted Oct 17, 2012 14:26 UTC (Wed) by arnd (subscriber, #8866) [Link] (1 responses)

I believe btrfs has improved significantly in this area, but its design means that it won't be as good as f2fs on the media that f2fs optimizes for. The issue with b-tree updates that Ted mentions in the link is something that f2fs avoids by having another level of indirection that is not copy-on-write, and btrfs suffers more from fragmentation because it intentionally does not garbage-collect.
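To illustrate why that extra, non-copy-on-write level of indirection matters (a toy model only, not f2fs's actual on-disk format; the names here are made up): in a pure COW tree, relocating one data block forces every pointer block on the path up to the root to be rewritten as well, whereas an address table that is updated in place lets a block move while the tree above it keeps pointing at the same logical node number.

    # Toy model: blocks rewritten when a single data block is relocated.

    def cow_tree_blocks(depth):
        # Pure COW tree: the data block plus every pointer block on the
        # path up to the root gets written to a new location.
        return 1 + depth

    def address_table_blocks():
        # With a mutable address table in front of the tree, only the
        # relocated data block and its table entry need to be written;
        # the pointer blocks above it are untouched.
        return 1 + 1

    print(cow_tree_blocks(depth=4))   # 5 blocks for a 4-level tree
    print(address_table_blocks())     # 2 blocks, regardless of depth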

On a lot of flash devices, btrfs starts out significantly faster than ext4 after a fresh mkfs, but it's possible that btrfs performance degrades more as the file system fragments with aging. I don't have any data to back that up though.

An f2fs teardown

Posted Nov 16, 2012 15:33 UTC (Fri) by oak (guest, #2786) [Link]

Nobody mentioned compression, but I think BTRFS can use e.g. LZO compression. What's the situation with that?

An f2fs teardown

Posted Oct 16, 2012 18:22 UTC (Tue) by tomstdenis (guest, #86984) [Link] (4 responses)

4TB max for a file is not a problem.

Let's look at your typical use case [e.g. a cell phone]. Max download speeds are in the 5-50Mbit/sec range realistically. It'd take about a week of straight downloading at 50Mbit/sec to fill that up.

If that were a 720p-quality video it'd play for 4+ days straight...
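A quick back-of-the-envelope check of those figures (a rough sketch; it assumes 4TB means 4x10^12 bytes, a sustained 50Mbit/sec link, and an 8Mbit/sec 720p bitrate, the last of which is my assumption rather than anything stated above):

    # Back-of-the-envelope check of the download and playback times.
    file_bits = 4e12 * 8                 # a 4TB file, in bits

    download_days = file_bits / 50e6 / 86400
    print(round(download_days, 1))       # ~7.4 days of non-stop downloading

    playback_days = file_bits / 8e6 / 86400
    print(round(playback_days))          # ~46 days of continuous 720p playback

If anything, both figures above are conservative.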

An f2fs teardown

Posted Oct 17, 2012 18:13 UTC (Wed) by intgr (subscriber, #39733) [Link] (1 responses)

> 4TB max for a file is not a problem. [...] Max download speeds are in the 5-50Mbit/sec range realistically.

Famous last words.

2GB max for a file wasn't a problem in 1996 when they designed FAT32, either. It would take over 5 days to fill that over a 33.6 kbaud modem in those days.
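That figure holds up, for what it's worth (quick check, assuming 2GB means 2x10^9 bytes and the full 33.6kbit/sec line rate with no protocol overhead):

    # Filling a 2GB file over a 33.6k modem, ignoring protocol overhead.
    days = (2e9 * 8) / 33.6e3 / 86400
    print(round(days, 1))   # ~5.5 days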

Now I can plug an HDMI-capable cellphone into a 1080p TV and stream multi-gigabyte Blu-ray rips over Wi-Fi. Yet I can't store them on the SD card because someone thought "it would never be a problem".

An f2fs teardown

Posted Oct 28, 2012 17:20 UTC (Sun) by khim (subscriber, #9252) [Link]

This is an interesting comment. Note that FAT32 was explicitly designed as a stop-gap solution for Windows96 (and then retrofitted into Windows95 OSR2 when Windows96 became first Windows97 and then Windows98). The long-term solution was supposed to be Windows 2000 (and later Windows XP), and that worked like a charm.

But then FAT32 was used for a totally unrelated task (USB sticks), and this is where its limitations became problematic... and since Microsoft wants to monopolize this market too, instead of FAT32X we've gotten exFAT... which is, of course, not supported by many, many things, because its implementation is not free: exFAT is heavily patented.

Moral? F2FS's limitations are fine for what it's designed for, but if we try to use it for unrelated tasks... we may be in trouble.

An f2fs teardown

Posted Oct 18, 2012 4:12 UTC (Thu) by cyanit (guest, #86671) [Link] (1 responses)

It is a problem: how about a 5TB disk image/virtual disk on a virtualized server that has a RAID array of ten 512GB SSDs? (The SSDs would only cost around $6000.)

Not to mention the fact that files can be sparse.

An f2fs teardown

Posted Oct 18, 2012 14:23 UTC (Thu) by arnd (subscriber, #8866) [Link]

f2fs isn't really optimized for SSDs at all. The largest media it actually targets today are USB sticks of maybe 128GB, which are both slow and expensive. Rather than using a RAID of 40 USB sticks and f2fs, I would always recommend getting a bunch of SSDs and using btrfs on them.

