Odd iozone results
Posted Jan 31, 2012 8:19 UTC (Tue) by sbergman27 (guest, #10767)
In reply to: Odd iozone results by raven667
Parent article: XFS: the filesystem of the future?
If random I/O were the major factor in the Mixed Workload test, why would moving up from a 4k block size take me from such a dismal data rate to nearly the drive's peak sequential read/write rate? It makes no sense.
That's the significant question. Why does increasing the record size by a factor of 4 result in such a dramatic increase in throughput?
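One way to get intuition for this is a toy back-of-the-envelope model (my own illustration, not anything iozone actually measures): if every random record read or write pays a fixed seek cost before a sequential transfer, throughput scales almost linearly with record size until transfer time starts to dominate. The seek and transfer numbers below are assumptions chosen for illustration.

```python
# Toy model: random-I/O throughput vs. record size, assuming each
# operation pays one fixed seek before a sequential transfer.
# All numbers are illustrative assumptions, not measurements.

SEEK_S = 0.008      # assumed average seek + rotational latency: 8 ms
XFER_BPS = 100e6    # assumed sequential transfer rate: 100 MB/s

def throughput(record_bytes):
    """Bytes/second when each record costs one seek plus its transfer time."""
    return record_bytes / (SEEK_S + record_bytes / XFER_BPS)

for kib in (4, 16, 64, 256):
    r = kib * 1024
    print(f"{kib:4d} KiB records -> {throughput(r) / 1e6:6.2f} MB/s")
```

Under these assumptions, quadrupling the record size from 4k to 16k nearly quadruples throughput, since the seek cost is amortized over four times as much data. This simple model does not by itself explain reaching near-sequential rates, which is why the caching and readahead effects discussed below matter.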
And BTW, I have not yet explicitly congratulated the XFS team on the fact that XFS is now nearly as fast as EXT4 on small-system hardware, at least in this particular benchmark. So I will offer my congratulations now.
-Steve
Posted Jan 31, 2012 10:59 UTC (Tue)
by dlang (guest, #313)
[Link]
If you do a sequential read test, readahead comes to your rescue. But if you are doing a mixed test, especially where the working set exceeds RAM, the application can end up stalling while it waits for a read, introducing additional delays between disk operations.
I don't know exactly what iozone is doing for its mixed test, but this sort of drastic slowdown is not uncommon.
Also, if you are using a journaling filesystem, there are additional delays as the journal is updated: each write that touches the journal turns into at least two writes, potentially with an expensive seek between them.
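The cost of that extra journal write can be sketched with some rough arithmetic (assumed numbers, not a claim about any particular filesystem's journal layout): if a small write becomes a journal commit plus the data write, each with its own seek, small-write throughput is roughly halved in the worst case.

```python
# Back-of-the-envelope illustration of journal write amplification.
# Seek and transfer numbers are assumptions chosen for illustration.

SEEK_S = 0.008      # assumed seek cost between journal and data areas
XFER_BPS = 100e6    # assumed sequential transfer rate: 100 MB/s

def write_time(record_bytes, journaled):
    """Seconds to complete one write; journaling doubles the operations."""
    ops = 2 if journaled else 1   # journal commit + data write
    return ops * (SEEK_S + record_bytes / XFER_BPS)

r = 4 * 1024
plain = r / write_time(r, journaled=False)
jrnl = r / write_time(r, journaled=True)
print(f"4 KiB writes: {plain / 1e6:.2f} MB/s plain, {jrnl / 1e6:.2f} MB/s journaled")
```

Real journals batch commits and write sequentially, so the actual penalty is usually smaller than this worst case; the model only shows why disabling the journal is a useful control experiment.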
I would suggest running the same test on ext2 (or ext4 with the journal disabled) and see what you get.
Posted Jan 31, 2012 17:55 UTC (Tue)
by jimparis (guest, #38647)
[Link]
From what I can tell by reading the source code, it is a mix in which half of the threads do random reads and half do random writes.