I didn't know that Btrfs needed more IO for metadata... I'm not sure why. Maybe the generic btree and the extra fields needed for the Btrfs features make the btree less compact? Or the double indexing of the contents of a directory doubles the size of the metadata?
I understand the concerns about fragmentation of data due to COW - my workstation runs on top of Btrfs and some files are so fragmented that they don't seem like files anymore (.mozilla/firefox/loderdap.default/urlclassifier3.sqlite: 1145 extents found).
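(For what it's worth, that "N extents found" line is what filefrag prints, and it gets the number through the FIEMAP ioctl. A minimal sketch of the same query in C - assuming a Linux box with the linux/fiemap.h headers, and calling it count_extents.c just for illustration - would be something like:)

    /* count_extents.c - roughly how filefrag counts extents: ask FIEMAP to map
     * the whole file with fm_extent_count = 0, which makes the kernel just
     * report how many extents the mapping would need. */
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>
    #include <linux/fiemap.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        struct fiemap fm;
        memset(&fm, 0, sizeof(fm));
        fm.fm_start = 0;
        fm.fm_length = ~0ULL;           /* map the whole file */
        fm.fm_flags = FIEMAP_FLAG_SYNC; /* flush dirty data before mapping */
        fm.fm_extent_count = 0;         /* 0 = only count, don't return extents */

        if (ioctl(fd, FS_IOC_FIEMAP, &fm) < 0) {
            perror("FS_IOC_FIEMAP");
            close(fd);
            return 1;
        }
        printf("%s: %u extents found\n", argv[1], fm.fm_mapped_extents);
        close(fd);
        return 0;
    }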
But COW data fragmentation is only one side of the coin - you could say that non-COW filesystems such as XFS suffer the reverse, a kind of "write fragmentation" (although I don't know how much of a real problem it is in practice). From this point of view, using COW or not for data is mostly a matter of policy. And since Btrfs can disable data COW not just for the entire filesystem but for individual files/directories/subvolumes, it doesn't seem like a real problem - "if it hurts, don't do it". The same applies to data checksums and the rest of the data transformations.
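(To make the "if it hurts, don't do it" part concrete: per-file NOCOW is just the attribute that chattr +C sets. A rough sketch of doing the same through the FS_IOC_SETFLAGS ioctl follows - with the usual caveat that on Btrfs the flag only really takes effect on empty files, or on a directory so that new files inherit it:)

    /* nocow.c - roughly what `chattr +C <path>` does: set FS_NOCOW_FL so Btrfs
     * stops doing copy-on-write (and data checksumming) for this file's data. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <path>\n", argv[0]);
            return 1;
        }
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        int flags;
        if (ioctl(fd, FS_IOC_GETFLAGS, &flags) < 0) {
            perror("FS_IOC_GETFLAGS");
            close(fd);
            return 1;
        }
        flags |= FS_NOCOW_FL;           /* the bit chattr spells 'C' */
        if (ioctl(fd, FS_IOC_SETFLAGS, &flags) < 0) {
            perror("FS_IOC_SETFLAGS");
            close(fd);
            return 1;
        }
        close(fd);
        return 0;
    }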
As for the issue of making metadata writeback scalable, that was probably the most interesting part of your talk. I imagined it as a particular case of soft updates.