There is a saying that if you put two teams to work on a compiler, you get a two-pass compiler. In this case the architecture gets a redesign: it may be impossible to simply drop a new piece in place of an old one, because there is no longer a slot to drop the new piece into.
Wayland will not do network transparency, because the applications apparently must have direct access to the underlying framebuffer. Therefore the displaying parts must run locally. I personally support this, because X is really pretty inefficient when it comes to actually getting pixels on the display: you can't seem to escape blitting and conversions with X, no matter how hard you try.
On the other hand, network transparency would then have to be done at the toolkit level if it's really wanted: the app would run remotely and send a toolkit-specific protocol over an ssh pipe instead of drawing commands or textures. In practice it would apparently be something like this: "Put a button at coordinates x, y, with size dx, dy and text z. Give it a gradient and rounded borders of 2 pixel radius. Here are the colors for the gradient. Send me the event 'foo-clicked' when the user pushes it." The local application could probably be some kind of generic "gtkd" or "qtd" that would know how to construct the UI in response to commands like that.
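To make the idea concrete, here is a minimal sketch of what one message in such a toolkit wire protocol might look like. Everything here is hypothetical: the message fields, the event name, and the tiny "daemon" handler are illustrative inventions, not any real GTK or Qt protocol; a real "gtkd" would build actual toolkit widgets instead of echoing an event back.

```python
import json

# Hypothetical message a remote app might send to a local "gtkd"-style
# daemon over an ssh pipe. All field names are illustrative.
create_button = {
    "op": "create-widget",
    "widget": "button",
    "id": "btn-1",
    "x": 10, "y": 20, "dx": 120, "dy": 32,
    "text": "z",
    "style": {
        "gradient": ["#e0e0e0", "#a0a0a0"],  # colors for the gradient
        "corner-radius": 2,                  # rounded borders, 2 px radius
    },
    "subscribe": ["foo-clicked"],            # events the app wants back
}

def encode(msg):
    """One line of JSON per message -- trivial to push through a pipe."""
    return (json.dumps(msg) + "\n").encode()

def decode(line):
    return json.loads(line)

def handle(msg):
    """Stand-in for the local daemon: a real one would construct the
    toolkit widget; here we just simulate the subscribed event firing."""
    if msg["op"] == "create-widget":
        return {"op": "event", "id": msg["id"], "name": msg["subscribe"][0]}

wire = encode(create_button)
event = handle(decode(wire.decode()))
print(event)  # {'op': 'event', 'id': 'btn-1', 'name': 'foo-clicked'}
```

The appeal of this shape is that only high-level intent crosses the network, not pixels, so the local side is free to render the button however the local toolkit and display want.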