DRBD has a steeper learning curve than you'd expect but, yeah, it's pretty fun.
You'd need a filesystem between DRBD and NFS. If you want a primary-primary DRBD (data locally accessible on both nodes), that filesystem must be a cluster filesystem, which restricts you to GFS, OCFS2, etc., and those don't play well with NFS. If you set up a primary-secondary DRBD, you can use ext3 in the middle, which works great with NFS, but then the only benefit DRBD brings is hot failover. And there are MUCH easier ways to set up HA NFS. So... probably not the best way to go.
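For reference, a primary-secondary resource is only a few lines of drbd.conf. This is a minimal sketch; the hostnames (nfs1/nfs2), devices, and addresses are made up, so adjust for your environment:

```
# Hypothetical /etc/drbd.d/r0.res -- two-node primary-secondary resource
resource r0 {
  protocol C;                 # synchronous: safest choice for hot failover
  on nfs1 {
    device    /dev/drbd0;     # the DRBD device you put ext3 on
    disk      /dev/sdb1;      # local backing device
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on nfs2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

You'd mkfs ext3 on /dev/drbd0 on whichever node is primary, mount it, and export that mount via NFS; on failover you promote the secondary and re-export.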
Yes, you can stack DRBD: http://www.drbd.org/users-guide/s-three-nodes.html But unless you're just asynchronously mirroring a volume for hot backup, which works great, stacking tends to be pretty cranky. Definitely don't think, "hey, I can create a 7-layer stack and distribute a single block device to all my satellite offices!" You won't be happy.
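The three-node pattern from that guide page looks roughly like this: a normal two-node resource r0 between the local pair, and a stacked resource on top of it that ships data on to the third node. Everything here (names, devices, addresses) is a hypothetical sketch:

```
# r0 is an ordinary two-node resource between alice and bob (not shown).
# r0-U stacks on top of whichever of them is currently primary and
# replicates asynchronously to a third node, charlie.
resource r0-U {
  protocol A;                        # async is typical for the remote leg
  stacked-on-top-of r0 {
    device  /dev/drbd10;
    address 192.168.42.1:7789;       # floating IP shared by alice/bob
  }
  on charlie {
    device    /dev/drbd10;
    disk      /dev/sdc1;
    address   192.168.42.2:7789;
    meta-disk internal;
  }
}
```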
The delay is entirely dependent on your network; DRBD itself is pretty light. But remember that block devices tend to use TONS of bandwidth. DRBD includes a userspace proxy that will do compression to make things more tolerable over WAN links, but it makes things more complex... Only use it if you need to.
The replication protocol just specifies how long the primary has to wait before considering a write complete. It lets you trade a small risk of data loss for a large improvement in write latency. With protocol A, a write completes as soon as it's on the local disk and handed off to the network, so writes stay fast, but there's a slightly higher risk of data loss. With protocol C, the write doesn't complete until the data is actually on the oxide of the remote disk, reducing your potential data loss to pretty much nil, but then writes on the primary take a lot longer and there will be a lot more data in flight at any one time.
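The protocol is a one-line choice per resource in drbd.conf (the resource name and disks here are placeholders):

```
resource r0 {
  # A = async: write completes once on local disk + sent to the network
  # B = memory synchronous: completes once the peer has received the data
  # C = synchronous: completes once the peer has written it to disk
  protocol C;
  on node1 { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.1:7788; meta-disk internal; }
  on node2 { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.2:7788; meta-disk internal; }
}
```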
So, with an infinitely fast network, there's basically no downside to going with C. Over a WAN, C would probably be intolerable.
> Is it possible to choose whether a disk is under DRBD control?
What do you mean? You can put pretty much any block device under DRBD control. Right now my stack is SATA > LVM > DRBD > OCFS2. Putting DRBD on top of LVM means that I can grow DRBD+OCFS2 just by attaching more storage anywhere on the system. It's pretty nice. But you could just as easily go SATA > DRBD > LVM > OCFS2 (if you will be snapshotting a lot), or SATA > LVM > DRBD > LVM > etc...
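Here's a rough sketch of the LVM-backed layout described above, plus the grow sequence it enables. The volume group, LV, and resource names are hypothetical, and you should check the exact resize commands against your man pages:

```
# DRBD backed by an LVM logical volume (SATA > LVM > DRBD > OCFS2):
resource r0 {
  on node1 {
    device    /dev/drbd0;
    disk      /dev/vg0/drbd0;    # LVM LV as the backing device
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/vg0/drbd0;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}

# To grow the stack after adding storage to the VG:
#   lvextend -L +50G /dev/vg0/drbd0    # on both nodes
#   drbdadm resize r0                  # on the primary
#   ...then grow the filesystem with its own resize tool
```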