The 16k record case is at least 60x faster than the 4k record case, probably more. I find this very odd.
You had already covered the metadata case, and I didn't want to replicate that, so the fact that this is not metadata intensive was intentional. This is closer to XFS's traditional stomping grounds, but with small system hardware rather than large RAID arrays.
The mixed workload does random 4KB write IO - it even tells you that when it starts.
Check again. It doesn't say that at all. Some confusion may have been caused by my pasting the wrong command line into my post. You can safely ignore the "-i 4", which I believe *does* do random reads/writes.
I should also mention that what prompted me to try 16k records was reviewing my aforementioned server benchmark, where it turned out I had specified 16k records.
Rerunning the benchmark on the 8GB RAM server with a 3-drive RAID1 and a 16GB dataset, I see similar "seek death" behavior.
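For comparison, here is roughly the kind of iozone invocation being discussed. This is a sketch, not the exact command from the thread: the file size (16GB, to exceed the 8GB of RAM) and the test selection are assumptions; only the record size difference (4k vs 16k) is the point.

```shell
# Sequential write + random read/write, 4k records, 16GB file
# (-e flushes via fsync, -o opens O_SYNC so the page cache can't hide the cost)
iozone -i 0 -i 2 -r 4k -s 16g -e -o -f /mnt/test/iozone.tmp

# Same workload with 16k records for comparison
iozone -i 0 -i 2 -r 16k -s 16g -e -o -f /mnt/test/iozone.tmp
```

With a dataset twice the size of RAM, the random-write phase should show the 4k-vs-16k record disparity described above rather than cache effects.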