We had to deploy this into production ~3 years ago, when there were fewer alternatives. I agree that it hurt performance a bit, though we still got pretty awesome performance: the secondary server was completely idle apart from the DRBD slave, and the network was a single 6' cable between two dedicated gigabit cards. With protocol B, I think we found that copying the data to the secondary server's RAM took less time than the primary server's own disk writes (at least for larger datasets).
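For anyone curious what that looks like: protocol B acknowledges a write once it has reached the peer node's RAM, rather than waiting for the peer's disk as protocol C does. A rough resource-config sketch (hostnames, devices, and addresses are made up; roughly DRBD 8.x-era syntax) is something like:

    resource r0 {
      protocol B;                   # ack once data reaches the peer's RAM, not its disk
      on db-primary {               # hypothetical hostname
        device    /dev/drbd0;
        disk      /dev/sdb1;        # hypothetical backing device
        address   192.168.1.1:7789;
        meta-disk internal;
      }
      on db-secondary {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.2:7789;
        meta-disk internal;
      }
    }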
I'm curious why replicating the whole filesystem has to copy much more data than other forms of database replication: I thought the ext4 overhead was typically quite small?
As you say, there are some problems with small writes, and there is a case for deciding on a per-transaction-type basis whether a given transaction is critical or slightly less so (i.e., if someone puts an axe through the server right now, how much do the last 20 ms of that type of data matter?). Either way, PostgreSQL is an amazing product :-)
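PostgreSQL does let you make that call per transaction via synchronous_commit. A minimal sketch with psycopg2 (the table name and connection string are made up):

    import psycopg2

    conn = psycopg2.connect("dbname=app")  # hypothetical connection string

    with conn:
        with conn.cursor() as cur:
            # For this transaction only, don't wait for the WAL flush to disk;
            # a crash can lose the last few hundred ms of these writes, but
            # it cannot corrupt the database.
            cur.execute("SET LOCAL synchronous_commit TO off")
            cur.execute("INSERT INTO page_views (url) VALUES (%s)", ("/home",))
    # Critical transactions just skip the SET LOCAL and keep the default
    # (synchronous_commit = on), so their commits still wait for the disk.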