
A bcache update


Posted May 14, 2012 20:07 UTC (Mon) by blitzkrieg3 (guest, #57873)
Parent article: A bcache update

> Obviously, writeback caching also carries the risk of losing data if the system is struck by a meteorite before the writeback operation is complete.

It isn't clear to me why this is true. SSDs are persistent storage and the data is still in the SSD, so why can't this be persistent? The only way this could be a problem is if the mapping is in memory and not written out to the SSD ever.



A bcache update

Posted May 14, 2012 20:10 UTC (Mon) by corbet (editor, #1) [Link] (4 responses)

Yes, exactly...as I tried to explain in that same paragraph. The data does exist on SSD, but it's only useful if the post-meteorite kernel knows what data is there. So the index and such have to be saved to the SSD along with the data.
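To make the point concrete, here is a minimal sketch (in Python, with dicts standing in for devices; this is not bcache's actual on-disk format) of a writeback cache that persists its block index alongside the cached data, so that a fresh instance after a crash can still find the dirty blocks on the SSD:

```python
import json

class WritebackCache:
    def __init__(self, ssd):
        self.ssd = ssd                        # dict simulating the SSD
        # Recover the index from the SSD if a previous instance saved one.
        self.index = json.loads(ssd.get("index", "{}"))

    def write(self, block, data):
        # Data goes to the SSD first; writeback to the HDD happens later.
        self.ssd[f"data:{block}"] = data
        self.index[str(block)] = True         # mark the block dirty
        # The crucial step: persist the index too. Without this line the
        # data would survive a crash, but nothing would know it is there.
        self.ssd["index"] = json.dumps(self.index)

    def read(self, block):
        if str(block) in self.index:
            return self.ssd[f"data:{block}"]
        return None                           # would fall through to the HDD

ssd = {}
WritebackCache(ssd).write(42, "payload")
recovered = WritebackCache(ssd)               # simulated post-crash restart
assert recovered.read(42) == "payload"        # the mapping survived
```

Drop the line that saves the index and the data is still physically on the "SSD" after the restart, but `read()` can no longer find it, which is exactly the failure mode described above.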

bcache cache-sets

Posted May 14, 2012 20:49 UTC (Mon) by Lennie (subscriber, #49641) [Link]

Also, as briefly mentioned in the older article: bcache has cache-sets, so you can assign several SSDs to one backing store.

Eventually bcache is supposed to get support for mirroring the dirty data, so your dirty data will be stored on two SSDs before it is written to your already redundant backing store (like a RAID1 of HDDs).

Any data that is only a cached copy of data already written to the backing store will be stored just once, on one of the SSDs.

Once that has been added, it should take away most concerns people might have about their data.
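The policy being described could be sketched like this (a hypothetical illustration in Python, with dicts standing in for the SSDs; the names are not bcache's): dirty blocks are mirrored to every cache device because they are the only copy, while clean blocks are just copies of the backing store and need only one:

```python
class CacheSet:
    def __init__(self, ssds):
        self.ssds = ssds      # list of dicts simulating the SSDs
        self.next = 0         # round-robin cursor for clean data

    def cache_clean(self, block, data):
        # A clean block already exists on the backing store; one SSD
        # copy is enough, so spread clean data across the set.
        self.ssds[self.next][block] = data
        self.next = (self.next + 1) % len(self.ssds)

    def cache_dirty(self, block, data):
        # Dirty data is the only copy anywhere, so mirror it to every
        # SSD before the backing store ever sees it.
        for ssd in self.ssds:
            ssd[block] = data

ssds = [{}, {}]
cs = CacheSet(ssds)
cs.cache_dirty("journal", "unwritten")   # survives losing either SSD
cs.cache_clean("readcache", "backed-up") # recoverable from the HDD anyway
```

With that policy, losing a single SSD costs only clean cache contents, which can be re-read from the (already redundant) backing store.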

A bcache update

Posted May 14, 2012 23:28 UTC (Mon) by russell (guest, #10458) [Link] (2 responses)

Could it be that the SSD is a single point of failure in front of a redundant set of disks? If so, writing to the SSD is probably no better than keeping the data in RAM: it trades a power-supply failure for an SSD failure.

A bcache update

Posted May 15, 2012 7:56 UTC (Tue) by Tobu (subscriber, #24111) [Link] (1 response)

I think the SSD will be a single point of failure in writeback mode, because the underlying filesystem could have megabytes of metadata or journal blocks not written in the right order, which means bad corruption. I don't know how SSDs tend to fail; if they fail into a read-only state, the writes would still be recoverable in this case, as long as bcache can replay from a read-only SSD. Maybe a filesystem that handles SSD caching itself could avoid that risk.
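The replay idea suggested here could look roughly like this (a hypothetical Python sketch, not bcache's actual recovery code): if the cache device fails read-only, the dirty blocks can still be copied onto the backing store in their original write order, preserving the ordering (e.g. journal before metadata) that the HDD never saw:

```python
from types import MappingProxyType

def replay(readonly_ssd, backing, dirty_log):
    # dirty_log lists (sequence, block) pairs in the order the writes
    # were made; replaying in that order preserves write ordering.
    for _, block in sorted(dirty_log):
        backing[block] = readonly_ssd[block]

ssd = {"journal": "J1", "metadata": "M1"}
frozen = MappingProxyType(ssd)   # simulate an SSD that failed read-only
backing = {}
replay(frozen, backing, [(0, "journal"), (1, "metadata")])
assert backing == {"journal": "J1", "metadata": "M1"}
```

`MappingProxyType` makes the "SSD" unwritable here, illustrating that recovery only needs read access to the failed cache device.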

A bcache update

Posted May 15, 2012 9:05 UTC (Tue) by Lennie (subscriber, #49641) [Link]

Did you see my other comment?

About how bcache will support more than one SSD in the future and how it will save 2 copies of your precious data on different SSDs instead of one:

http://lwn.net/Articles/497126/

A bcache update

Posted May 15, 2012 19:13 UTC (Tue) by dwmw2 (subscriber, #2063) [Link]

"SSDs are persistent storage and the data is still in the SSD, so why can't this be persistent? The only way this could be a problem is if the mapping is in memory and not written out to the SSD ever."
Or if your SSD is like almost all SSDs ever seen, where the internal translation layer is a "black box" that you can't trust, is known for its unreliability especially in the face of unexpected power failure, and can't be debugged or diagnosed when it goes wrong.


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds