I address this point in the presentation - XFS is not trying to replace BTRFS, because XFS has a fundamentally different view of data from BTRFS and ZFS. That is, XFS does not "transform" user data (e.g. CRC, encrypt, compress or dedupe it) as it passes through the filesystem. All XFS does is provide an extremely large pipe to move data between the application and the storage hardware. There is no way we can scale to tens of GB/s of data throughput if we have to run CPU-based calculations on every piece of data that passes through the filesystem.
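To make the scaling argument concrete, here is a back-of-envelope sketch of the CPU cost of transforming every byte at those rates. The per-core throughput figures below are illustrative assumptions I've picked for the arithmetic, not measurements of any filesystem:

```python
# Back-of-envelope: CPU cost of running a data transform on every byte.
# NOTE: the per-core rates are assumed, illustrative numbers, not benchmarks.

target_gbps = 20.0                  # target: "tens of GB/s" of file data
crc_gbps_per_core = 2.0             # assumed software checksum rate per core
compress_gbps_per_core = 0.5        # assumed lz-class compression rate per core

crc_cores = target_gbps / crc_gbps_per_core
compress_cores = target_gbps / compress_gbps_per_core

print(f"cores consumed by checksumming alone: {crc_cores:.0f}")
print(f"cores consumed by compression alone:  {compress_cores:.0f}")
# Even with these optimistic per-core rates, hitting 20 GB/s means
# dedicating ~10 cores to checksums or ~40 to compression - CPU that
# the application can no longer use.
```

The exact numbers don't matter much; the point is that any per-byte transform turns a fixed fraction of your CPU budget into filesystem overhead, and that fraction grows linearly with throughput.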
This is a fundamental limitation of filesystems like BTRFS and ZFS - they assume there is CPU and memory to burn on the transformations, and that those transformations scale arbitrarily well. If you are short on CPU or memory (e.g. your application is using it!), then hardware offload is the only way you can scale such data transforms. At that point, you may as well be using XFS.
i.e. BTRFS with all its features enabled will only scale up to a certain point, and there are already many people out there with performance requirements well beyond that cross-over point. It's above that cross-over point that I see XFS as "the filesystem of the future". Indeed, I expect the combination of BTRFS for system/binary/home filesystems and XFS for production data filesystems to become quite a common server configuration in the not too distant future...
Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds