Would you please stop whipping this long-dead horse?
Typical flash memory today is rated for 1M writes. Internal wear-leveling ensures that
this is the number of writes to the ENTIRE module (i.e. it is impossible to wear out
flash faster by writing repeatedly to the same location).
So even a SMALL 1GB flash module requires a minimum of 1000TB of data to be written
before it'll start failing (or another order of magnitude more for a module rated at
10M writes).
To put this in perspective: if you write to the module 24x365 at a constant speed of
1MB/s, you'll wear it out in 31 years. If you write constantly at 10MB/s, you'll wear
it out in 3 years.
For most uses, this simply isn't a concern. Most computers, even file-servers, don't
write constantly to disk 24x365, and even on those that do, the journal is metadata-only
by default, so only writes that change the filesystem structure generate any load on the
journal at ALL.
For those extremely rare servers that DO perform a gargantuan amount of
filesystem-structure-changing writes, there's a simple cure: buy a slightly LARGER
flash module.
Because of the wear-leveling, a flash module that is 4 times the size will sustain 4
times the amount of data written (measured in GB) before its cells start reaching their
limit.
A server that -MUST- journal more than a PETABYTE of data within its lifespan can also
afford to buy 16GB or 64GB of flash storage rather than a paltry 1GB.
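The scaling claim is the same arithmetic, and just as easy to verify. A minimal sketch
under the same assumed 1M-cycle rating (the module sizes are only illustrative):

    # Endurance scales linearly with module size, because wear-leveling
    # spreads the writes across every cell. Same assumed 1M-cycle rating.
    GB, PB = 10**9, 10**15
    cycles = 1_000_000

    for size_gb in (1, 4, 16, 64):
        endurance = size_gb * GB * cycles   # total bytes writable
        print(f"{size_gb:>2}GB module -> {endurance / PB:g}PB before wear-out")
    # -> 1GB: 1PB, 4GB: 4PB, 16GB: 16PB, 64GB: 64PB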