Thank you for the reply.
No. There is something else going on here. If I add "-r 16384" to use 16k records rather than 4k records, the mixed workload test flies:
59985 KB/s Write
60182 KB/s Rewrite
58812 KB/s Mixed Workload
69673 KB/s Write
60372 KB/s Rewrite
60678 KB/s Mixed Workload
The 16k record case is at least 60x faster than the 4k record case. Probably more. I find this to be very odd.
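For the record, the comparison can be reproduced with an invocation along these lines (the file size, path, and test selection here are my guesses rather than the exact command I ran; check iozone -h for the test numbers on your build -- test 8 is, as far as I can tell, the mixed-workload test):

```sh
# 4k records vs 16k records: sequential write/rewrite (-i 0) plus the
# mixed-workload test (-i 8 on my build -- verify with iozone -h).
# -s sets the file size; pick something larger than RAM to defeat caching.
iozone -s 16g -r 4k  -i 0 -i 8 -f /mnt/test/iozone.tmp
iozone -s 16g -r 16k -i 0 -i 8 -f /mnt/test/iozone.tmp
```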
You had already covered the metadata case. I didn't want to replicate that, so the fact that this test is not metadata-intensive was intentional. This is closer to XFS's traditional stomping grounds, but with small-system hardware rather than large RAID arrays.
The mixed workload does random 4KB write IO - it even tells you that when it starts.
Check again. It doesn't say that at all. Some confusion may have been caused by my cutting and pasting the wrong command line into my post. You can safely ignore the "-i 4", which I believe *does* do random read/writes.
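To be concrete about what a random small-record write workload looks like, here is a minimal sketch in plain Python against an ordinary file. This is not what iozone does internally; the record size, file size, and loop count are arbitrary illustrations:

```python
# Sketch of a random record-aligned write workload, similar in shape to
# what the mixed-workload discussion is about. Not iozone's implementation;
# all sizes and counts below are illustrative.
import os
import random
import tempfile

RECORD = 4096          # 4 KiB records; try 16384 to mimic the -r 16384 case
FILE_SIZE = 4 << 20    # 4 MiB test file, small enough to run anywhere

fd, path = tempfile.mkstemp()
try:
    os.ftruncate(fd, FILE_SIZE)
    buf = b"x" * RECORD
    random.seed(0)
    for _ in range(256):
        # Each write lands on a random record-aligned offset; on a rotating
        # disk that means roughly one seek per record once caches run out.
        offset = random.randrange(FILE_SIZE // RECORD) * RECORD
        os.pwrite(fd, buf, offset)
    os.fsync(fd)
    print(os.path.getsize(path))
finally:
    os.close(fd)
    os.unlink(path)
```

With a file much larger than RAM (as in the benchmarks above), the page cache can no longer absorb these writes and the record size starts to dominate throughput.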
I should also mention that what prompted me to try 16k records was reviewing my aforementioned server benchmark, where it turns out I had specified 16k records.
Rerunning the benchmark on the 8GB RAM server with a 3-drive RAID1 and a 16GB dataset, I see similar "seek death" behavior.
Copyright © 2017, Eklektix, Inc.