Interesting read. It looks like we are finally getting towards the sort of infrastructure I thought was necessary to support high-end SSDs. From the presentation I gave at the 2008 Filesystems and IO workshop (cut-and-paste from my slides, so please excuse the lack of formatting):
The IOPS Challenge
- Ready for 50,000 IOPS per disk?
+ >200,000 ctxsw/s per disk
+ 50,000 intr/s per disk
+ Does not scale to many disks
- Raw IOP capacity per HBA
+ will be a limiting factor
+ driver design will need to focus on IOPS optimisations, not achieving max bandwidth
- CPU overhead will be high
o Looks more like the network problem
- similar packet rates to gigabit ethernet per disk
- many, many more interfaces than a typical network stack
- HBAs with multiple disks will have to handle packet rates closer to 10Gb ethernet
- similar interrupt scaling tricks will be needed
+ MSI-X directed interrupts
+ one vector per disk behind the HBA?
+ polling rather than interrupt driven
o Will require both hardware and software to evolve
o Not going to happen overnight
o Two orders of magnitude increase in performance is a big jump
o Optimisations being made for current (cheap) SSDs have a limited shelf life
- random write performance is not a limiting factor at the high end....
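To make the slide's network comparison concrete, here is a back-of-envelope sketch of the numbers. The context-switch multiplier is the rough ratio implied by the slide (200,000 ctxsw/s for 50,000 IOPS); the ethernet packet rates assume 1500-byte frames at line rate and the 8-disk HBA is an illustrative configuration, not something from the slides:

```python
# Back-of-envelope arithmetic behind the slide's IOPS-vs-ethernet comparison.
# Assumptions (not from the slides): 1500-byte frames, line rate, 8-disk HBA.

iops_per_disk = 50_000   # target IOPS for a high-end SSD
ctxsw_per_io = 4         # rough multiplier implied by the slide (200k / 50k)

print(f"ctxsw/s per disk: {iops_per_disk * ctxsw_per_io:,}")  # 200,000

# Line-rate packet rates for 1500-byte frames (ignoring framing overhead).
frame_bits = 1500 * 8
gige_pps = 1_000_000_000 // frame_bits       # ~83,000 packets/s on 1GbE
tengig_pps = 10_000_000_000 // frame_bits    # ~833,000 packets/s on 10GbE

print(f"1GbE packets/s:  {gige_pps:,}")
print(f"10GbE packets/s: {tengig_pps:,}")

# One 50k IOPS disk is in the same ballpark as gigabit ethernet;
# an 8-disk HBA (400k IOPS) is getting close to 10GbE packet rates.
hba_disks = 8
print(f"8-disk HBA IOPS: {hba_disks * iops_per_disk:,}")
```

So a single disk generates roughly the per-second event rate of a busy gigabit NIC, and a multi-disk HBA approaches 10GbE rates, which is why the network stack's interrupt-scaling tricks (MSI-X vector steering, polling) carry over.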
I think this shows the value we have been getting from these workshops - cross-pollination of ideas, challenges, and techniques across the wider community. We might not see results immediately, but they are eventually appearing...