There's one thing I don't get about COW. Why is it so good? It's convenient for snapshots, but
if I don't use snapshots and have huge files whose pieces I rewrite often, then those files will
become highly fragmented, right?
Imagine a database data file that is itself organized as a B-tree. Some database rows will be
updated, and most DBMSs do that in place. So our modern COW filesystem will gradually
fragment that data file. And when the DBMS does, say, a range scan over the B-tree, it will
expect mostly sequential (and fast) disk IO, while in fact it will get random (and dead slow)
IO. Maybe I've missed something important here?
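To make the worry concrete, here's a toy model of the effect (everything in it is made up for illustration: real COW filesystems like Btrfs or ZFS have far smarter allocators, but the basic remapping behavior is the same). Each overwrite of a logical block is redirected to a fresh physical block, and we then count how many separate extents a sequential read of the file has to touch:

```python
import random

def cow_overwrite_sim(n_blocks=1000, n_writes=500, seed=0):
    """Toy model of a COW file layout.

    The file starts as one contiguous extent: logical block i lives at
    physical block i. Every overwrite is redirected to a fresh physical
    block at the end of the device (a deliberately naive allocator).
    Returns the number of physically contiguous extents afterwards.
    """
    rng = random.Random(seed)
    phys = list(range(n_blocks))       # logical -> physical mapping
    next_free = n_blocks               # next unused physical block
    for _ in range(n_writes):
        lb = rng.randrange(n_blocks)   # DBMS updates a random block...
        phys[lb] = next_free           # ...but COW writes it elsewhere
        next_free += 1

    # Each break in physical contiguity means a seek during a
    # logically sequential read (like a B-tree range scan).
    extents = 1
    for a, b in zip(phys, phys[1:]):
        if b != a + 1:
            extents += 1
    return extents

print(cow_overwrite_sim())
```

With zero overwrites the file is one extent; after a few hundred random in-place updates the count of extents explodes, which is exactly the sequential-turned-random IO pattern described above.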