
Optimizing Linux with cheap flash drives

Posted Feb 19, 2011 10:40 UTC (Sat) by arnd (subscriber, #8866)
In reply to: Optimizing Linux with cheap flash drives by aleXXX
Parent article: Optimizing Linux with cheap flash drives

The cheapest SSD drives are basically CF cards in a different form factor, or, nowadays, CF cards behind a PATA-to-SATA converter. These show exactly the same behavior as good SD cards.

High-end SSDs come with significant amounts of RAM that can be used to hide most of the nasty effects, or to do something much smarter altogether, such as implementing the entire drive as a log-structured file system.
The caching unfortunately makes it a lot harder to reverse-engineer the drive through timing attacks, so it's much harder to tell what it really does.

What we know is that the underlying NAND flash technology is very similar, so in the best case, an SSD will be able to hide the problems, but not completely avoid them. If I were to design an SSD controller, I'd do the same things that I'm suggesting in https://wiki.linaro.org/WorkingGroups/KernelConsolidation...
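The timing approach can be sketched in a few lines of Python. Everything below — the function name, block size, and sample count — is invented for illustration; on a real drive you would open with O_DIRECT (and aligned buffers) so the page cache does not hide the device's behavior, and a dedicated tool handles that properly:

```python
import os
import time

def time_reads(path, block=4096, count=8):
    """Time one read at each multiple of `block` (hypothetical helper).

    Latency jumps between neighboring offsets can reveal where the
    drive's internal segment (erase block) boundaries are.
    """
    fd = os.open(path, os.O_RDONLY)
    samples = []
    try:
        for i in range(count):
            os.lseek(fd, i * block, os.SEEK_SET)
            start = time.monotonic()
            os.read(fd, block)
            samples.append((i * block, time.monotonic() - start))
    finally:
        os.close(fd)
    return samples
```

With caching in the way, as noted above, these per-offset timings flatten out and stop revealing the drive's internal geometry.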



Optimizing Linux with cheap flash drives

Posted Feb 19, 2011 18:23 UTC (Sat) by aleXXX (subscriber, #2742)

You mention read/write speed around 15 MB/s.
How does that fit together with the numbers between 150 and 350 MB/s that are listed for SSD drives, e.g. on alternate.de?

Actually, I can remember that when writing to raw NAND we also saw rates somewhere in the 10 to 15 MB/s range.

Alex

Optimizing Linux with cheap flash drives

Posted Feb 19, 2011 20:03 UTC (Sat) by arnd (subscriber, #8866)

15 MB/s is typical for good SD cards (e.g. Class 6), which are limited by design to 20-25 MB/s anyway (UHS-1 SDHC will be faster, but is still rare today). High-end SSDs can be much faster for a number of reasons:

* SATA is a much faster interface than SD/MMC
* NCQ and write caching allow optimizing accesses by reordering and batching NAND flash operations
* Using SLC NAND instead of MLC improves raw access speed
* Using multiple NAND chips in parallel gives a better combined throughput
* Expensive microcontrollers on the drive can use smarter algorithms

All of these cost money, so you don't find them on the low-end drives that I analyzed.
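To see where a given drive actually lands between these numbers, a crude sequential-write measurement is easy to script. This is a rough sketch with made-up names and defaults; a real benchmark would bypass the page cache and average several runs:

```python
import os
import time

def sequential_write_mib_s(path, mib=64):
    """Rough sequential write throughput in MiB/s (illustrative only).

    Writes `mib` mebibytes in 1 MiB chunks, then fsync()s so buffered
    data is actually pushed to the device before timing stops.
    """
    chunk = b"\xa5" * (1 << 20)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    start = time.monotonic()
    try:
        for _ in range(mib):
            os.write(fd, chunk)
        os.fsync(fd)
    finally:
        os.close(fd)
    return mib / (time.monotonic() - start)
```

Run against a file on the drive under test; on a cheap SD card this lands near the 15 MB/s mentioned above, on a high-end SSD an order of magnitude higher.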

Optimizing Linux with cheap flash drives

Posted Feb 20, 2011 9:12 UTC (Sun) by alonz (subscriber, #815)

Actually, according to information in this AnandTech article, some high-end controllers use even weirder techniques... (they specifically mention real-time compression and real-time deduplication, and there is likely a lot more)

Optimizing Linux with cheap flash drives

Posted Apr 6, 2011 18:36 UTC (Wed) by taggart (subscriber, #13801)

The compression and deduplication in the SandForce controller show big benefits over controllers that lack them. But those benefits are lost if your data isn't compressible or redundant, e.g. if it is encrypted :(
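The effect is easy to demonstrate with any general-purpose compressor standing in for what a compressing controller sees (the names here are invented, and random bytes stand in for ciphertext):

```python
import os
import zlib

def compressed_size(data):
    # stand-in for the data reduction a compressing controller could achieve
    return len(zlib.compress(data))

redundant = b"Linux " * 1024        # 6 KiB of highly repetitive data
random_like = os.urandom(6 * 1024)  # stands in for encrypted data

# Repetitive data shrinks dramatically; random/encrypted data does not
# (zlib even adds a few bytes of framing overhead on incompressible input).
print(compressed_size(redundant), compressed_size(random_like))
```

A controller that writes less to the flash than the host sent gains both speed and endurance, which is exactly the advantage that disappears once the host hands it ciphertext.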

Live CD

Posted Apr 20, 2011 19:12 UTC (Wed) by dmarti (subscriber, #11625)

What about a live CD that you boot, type "yes, I want to trash my flash drive", and it automatically tries different partition schemes, runs benchmarks, and tells you which one is fast? Don't trust what the drive says; just try it a bunch of possible ways and see what works for real. (I'd pay $14.95 for the ISO, assuming the underlying code was Free.)
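A minimal sketch of that brute-force idea, with all names, candidate alignments, and parameters invented for illustration (a real version would write through O_DIRECT to the raw device and repeat each measurement many times):

```python
import os
import time

def fastest_alignment(path, candidates=(4096, 65536, 1 << 20), writes=8):
    """Time small writes at each candidate alignment; return the fastest.

    Toy version of "try everything and see what works": an alignment
    that matches the drive's internal geometry should show lower
    latency than one that straddles its segment boundaries.
    """
    timings = {}
    for align in candidates:
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
        start = time.monotonic()
        try:
            for i in range(writes):
                os.lseek(fd, i * align, os.SEEK_SET)
                os.write(fd, b"\0" * 512)
            os.fsync(fd)
        finally:
            os.close(fd)
        timings[align] = time.monotonic() - start
    return min(timings, key=timings.get)
```

The live CD would then partition the drive starting at whatever alignment won, instead of trusting the geometry the drive reports.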


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds