7 or so years ago this was true: you could rewrite data and take advantage of the fact that you could keep programming more bits. YAFFS1 used this as the mechanism for marking blocks superseded. But as soon as we got past 512-byte-page flash to 2K-page flash, rewriting was deprecated by the manufacturers. Even a single rewrite was discouraged, and the direction of travel was clear, so YAFFS2 changed the block-marking mechanism and never rewrote anything. By the time we got to MLC flash, no rewriting was permitted at all.
So whilst your theory is fine, in practice this hasn't been any use for years.
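For anyone who hasn't seen the old trick: NAND programming can only move bits one way (1 to 0); only a block erase sets them back to 1, so a second program pass effectively ANDs into whatever is already stored. A minimal sketch of the idea, using a single hypothetical spare-area marker byte (the names and bit assignments are illustrative, not real YAFFS1 code):

#include <stdint.h>
#include <stdio.h>

/* NAND program operations can only clear bits (1 -> 0); only a
 * block erase restores them to 1.  So programming a page a second
 * time ANDs the new data into what is already there. */
static uint8_t nand_program(uint8_t stored, uint8_t written)
{
    return stored & written;   /* bits can be cleared, never set back */
}

int main(void)
{
    /* Hypothetical spare-area marker byte: erased state is 0xFF. */
    uint8_t marker = 0xFF;

    /* First program pass: mark the chunk live by clearing bit 0. */
    marker = nand_program(marker, 0xFE);

    /* Later, without an erase, mark it superseded by clearing
     * bit 1 as well -- the YAFFS1-era rewrite trick. */
    marker = nand_program(marker, 0xFD);

    printf("marker = 0x%02X\n", marker);   /* 0xFC: both bits now 0 */
    return 0;
}

Once the manufacturers forbade the second program pass, that whole approach was dead, which is why YAFFS2 moved to write-once markers.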
If I'd been in the audience I'd have been in the 'give us raw access' group, or at least wanted very direct control over whatever 'optimisations' the device does itself. My experience is that Linux filesystem writers can do a much better job of getting this right than the flash vendors, who really don't share our optimisation priorities at all. Mostly their efforts to do things for us have produced a lot of shitty (slow, unreliable) flash in SD cards.
On the other hand, it is clear that some things are better done on the device (checksumming/ECC for a start), and potentially some other stuff too, in the way that modern disk drives by and large do a reasonable job internally without messing things up.
But if they just told us the block sizes, that would be a good start. The 5 years in which this hasn't been happening have been a terrible waste. I bet we have the legal departments and patents to thank for that, as well as a general lack of interest in anything other than 'FAT in cameras'.