A bcache update
Posted May 15, 2012 3:43 UTC (Tue) by koverstreet (✭ supporter ✭, #4296)
In reply to: A bcache update by dlang
Parent article: A bcache update
But for that to help, writeback would have to specifically pick bigger sequential chunks of dirty data and skip smaller ones, and I'm not sure how much that would actually gain. Right now it just flushes dirty data in sorted order, which makes things really easy and works quite well in practice - in particular for regular hard drives, even if your dirty data isn't purely sequential you're still minimizing seek distance. And if you let gigabytes of dirty data buffer up (via writeback_percent), the dirty data is going to be about as sequential as it's gonna get.
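To make the sorted-order idea concrete, here's a minimal userspace sketch - not bcache's actual code, and the dirty_extent struct is made up for the example - of sorting dirty extents by backing-device offset and flushing them in order:

    /* Sketch only: flush dirty extents in offset-sorted order so the
     * backing disk sees short, mostly monotonic seeks. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    struct dirty_extent {
        uint64_t offset;   /* byte offset on the backing device */
        uint64_t len;      /* length of the dirty range */
    };

    static int cmp_offset(const void *a, const void *b)
    {
        const struct dirty_extent *x = a, *y = b;
        if (x->offset < y->offset)
            return -1;
        return x->offset > y->offset;
    }

    int main(void)
    {
        /* Dirty data as it might accumulate from a mixed workload */
        struct dirty_extent dirty[] = {
            { 7340032, 4096 }, { 1048576, 131072 },
            { 9437184, 4096 }, { 1179648, 131072 },
        };
        size_t n = sizeof(dirty) / sizeof(dirty[0]);

        /* Writeback pass: sort by offset, then flush in order. Even
         * when the data isn't strictly sequential, seek distance is
         * minimized because the head only moves in one direction. */
        qsort(dirty, n, sizeof(dirty[0]), cmp_offset);
        for (size_t i = 0; i < n; i++)
            printf("flush %8llu bytes at offset %llu\n",
                   (unsigned long long)dirty[i].len,
                   (unsigned long long)dirty[i].offset);
        return 0;
    }

Even with the small random extents interleaved, the flush order above walks the disk in one direction, which is the seek-distance point made above.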
But yeah, there's all kinds of interesting tricks we could do.
Posted May 15, 2012 22:34 UTC (Tue) by intgr (subscriber, #39733)
That's certainly an important optimization for mixed random/sequential write workloads whose working set is larger than the SSD. To make the best use of both kinds of disks, random writes should persist on the SSD as long as possible, whereas longer sequential writes should be pushed out quickly to make more room for the random ones.
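As a rough sketch of that policy - nothing bcache actually implements; SEQ_WRITEBACK_THRESHOLD and writeback_eagerly are invented here for illustration - a simple size cutoff on dirty extents might look like:

    /* Illustrative only: push large sequential dirty runs to the
     * backing disk eagerly, let small random writes linger on the SSD. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Assumed cutoff: anything >= 1 MiB counts as "sequential enough" */
    #define SEQ_WRITEBACK_THRESHOLD (1u << 20)

    struct dirty_extent {
        uint64_t offset;   /* byte offset on the backing device */
        uint64_t len;      /* length of the dirty range */
    };

    /* Hypothetical policy hook: should this extent jump the writeback
     * queue ahead of smaller, random dirty data? */
    static bool writeback_eagerly(const struct dirty_extent *e)
    {
        return e->len >= SEQ_WRITEBACK_THRESHOLD;
    }

    int main(void)
    {
        struct dirty_extent small = { 4096, 4096 };   /* random 4k write */
        struct dirty_extent big   = { 0, 4u << 20 };  /* 4 MiB streaming run */

        printf("4k extent eager? %d\n", writeback_eagerly(&small));  /* 0 */
        printf("4M extent eager? %d\n", writeback_eagerly(&big));    /* 1 */
        return 0;
    }

A real policy would presumably also have to age small extents out eventually, or a purely random workload would never write anything back at all.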
Posted May 16, 2012 1:52 UTC (Wed) by koverstreet (✭ supporter ✭, #4296)
I'm not opposed to the idea if you can show me a workload that'd benefit, though - it's just a question of priorities.
Posted May 16, 2012 7:35 UTC (Wed) by intgr (subscriber, #39733)
Oh, I didn't realize that. That should take care of most of it. There's probably still some benefit to sorting writeback by size, but I'm not sure whether it's worth the complexity.