
The irony

Posted Nov 5, 2010 2:37 UTC (Fri) by martinfick (subscriber, #4455)
Parent article: Shuttleworth: Unity on Wayland

To think that with slow machines and slow networks in the late 1980s, network transparency was acceptable. Strange that, with networks and PCs having increased in speed by several orders of magnitude since, suddenly the display is too slow? The math here just doesn't seem right. We can stream video across the internet with YouTube or Skype, but a desktop has to run locally? Really, what is wrong with this picture? I am sure that there will be lots of technical excuses to defend this, but it just doesn't seem to add up. Something is seriously wrong with the desktop if it can't keep up with the web. Add in the twist that the desktop is supposedly going to be replaced by webapps and a webdesktop... are webapps fast enough to run remotely over the web, but not over X? ...Something stinks in graphics land.



The irony

Posted Nov 5, 2010 3:00 UTC (Fri) by bjacob (guest, #58566) [Link] (23 responses)

I'll tell you what the difference is:

* in the network-transparent X desktop, the graphics are (at least partially) done *server-side*. So we're killing ourselves doing roundtrips between the client and the server (so GUI snappiness is hurt by network latency), and we don't scale as we tax the poor server too much.

* in modern web apps, the graphics are done on the *client side* (in the browser, in JS). No round trips, and newer web standards (canvas! WebGL!) allow web apps to do, client-side, the same graphics operations that a local application could do, with WebGL even giving fairly direct access to the GPU.
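
For illustration, here is roughly what the client-side model looks like, as a minimal TypeScript sketch (the drawing code is hypothetical; only the standard canvas API is assumed):

    // All rendering happens in the browser; no server round trip per widget.
    const canvas = document.querySelector("canvas") as HTMLCanvasElement;
    const ctx = canvas.getContext("2d")!;

    function drawButton(x: number, y: number, label: string): void {
      ctx.fillStyle = "#ddd";
      ctx.fillRect(x, y, 120, 32);         // widget drawn locally
      ctx.fillStyle = "#000";
      ctx.fillText(label, x + 10, y + 20); // text rendered locally too
    }

    drawButton(20, 20, "OK"); // zero network traffic for any of this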

The irony

Posted Nov 5, 2010 3:14 UTC (Fri) by martinfick (subscriber, #4455) [Link] (22 responses)

This is a lame excuse; surely X could be improved to do more client-side if that were truly the problem. But this argument falls on its face when you consider that the people pushing for this change are likely people who do not even use X in a network-transparent way in the first place. This means they are complaining about the performance of a local X server WITHOUT ANY network latency (but using remote X as an excuse)!!! Surely this local model could be enhanced without killing X. Again, as I said in my original post, this does not add up. A web app written in JavaScript making remote web RPC calls is good enough, but local graphics on a local X server is not? What could possibly be required for normal desktop usage that even Gigabit Ethernet (and local X) could not handle, when 10Mbit could handle CAD remotely in 1987? I am sure my memory is forgetting how bad it was in 1987, but surely it can't be getting worse with faster technology, so why suddenly drop X now?

The irony

Posted Nov 5, 2010 3:33 UTC (Fri) by bjacob (guest, #58566) [Link] (9 responses)

My point was that the whole idea of a network-transparent protocol like X is now looking obsolete. It is, if you like, doing network transparency at what now appears to be the wrong level (given current hardware, and given how successful the browser has been with its different model).

Then, about your point that the network transparency of X should at least come for free when the server is local --- no idea; I'll let others reply here. But what's the point of a network protocol if it's going to be used less and less with remote servers?

The irony

Posted Nov 5, 2010 11:47 UTC (Fri) by sorpigal (guest, #36106) [Link] (8 responses)

I strongly disagree. X does network transparency at the right level, or at least a much better level than any other current system.

The irony

Posted Nov 5, 2010 12:58 UTC (Fri) by ibukanov (subscriber, #3942) [Link] (7 responses)

> X does network transparency at the right level,

Over the network, VNC has been working better for me than X, especially on high-latency links. That tells me that, from a practical point of view, X alone does not provide the right answer.

The irony

Posted Nov 5, 2010 15:23 UTC (Fri) by drag (guest, #31333) [Link]

* Citrix ICA
* Microsoft RDP
* Redhat Spice

VNC is not the only game in town, of course. X Windows networking is, indeed, very very cool. But it's been a very long time since it had any sort of monopoly over remote applications.

Windows users have been enjoying Windows-apps-over-internet for many, many years now.

Does anybody have a good idea of how many people use 'Go to My PC'? It's a huge number, and they all do it over the internet, and it works far better, and far more easily, than X Windows does.

The irony

Posted Nov 5, 2010 15:54 UTC (Fri) by deepfire (guest, #26138) [Link] (5 responses)

This is all based on the fact that the X implementation (the Xorg stack) we use daily isn't particularly efficient.

As I've already said below, if you want an apples-to-apples comparison, see Nomachine's NX. As I said, I use it daily, and my experience is extremely positive.

And yes, it's open source.

The irony

Posted Nov 6, 2010 22:17 UTC (Sat) by ceswiedler (guest, #24638) [Link] (4 responses)

Slight correction: there's an open, but somewhat old, version of NX. Recent versions maintained by Nomachine are free-as-in-beer for noncommercial use.

NX is excellent and I highly recommend it for remote X access, even on a local network, since it provides session restoration and "just works". From what I understand, it compresses extremely well due to the nature of the X protocol, since it can see when things actually need to be sent to the client. A VNC or RDP server, by comparison, only has the final rendered product.
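
To make that concrete, a back-of-the-envelope sketch (hypothetical numbers; the 20-byte figure approximates a single X fill-rectangle request, and real traffic is compressed further on top of this):

    // Why seeing drawing *commands* (NX) can beat seeing only *pixels* (VNC).
    const w = 800, h = 600;

    const commandBytes = 20;       // roughly one X fill-rectangle request
    const pixelBytes = w * h * 4;  // same rectangle as raw 32-bit pixels

    console.log(commandBytes, "bytes as a drawing command");
    console.log(pixelBytes, "bytes as raw pixels, before any compression");
    // 20 vs 1,920,000: the protocol-level view has far less to send.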

The irony

Posted Nov 7, 2010 11:24 UTC (Sun) by deepfire (guest, #26138) [Link] (3 responses)

No, you are wrong, see http://www.nomachine.com/sources.php

The sources for the core transport libraries are all there.

The missing stuff is the end-user application code, which they make money from.

The irony

Posted Nov 7, 2010 20:41 UTC (Sun) by dtlin (subscriber, #36537) [Link] (2 responses)

http://www.nomachine.com/redesigned-core.php

The new core of NX 4.0 is made up of a set of libraries written from the ground up to ensure portability, flexibility and state-of-the-art performance. NX 4.0 core libraries will not be made open source. Although NX 3.x core compression technology and earlier versions will remain GPL, NoMachine engineers will not be developing the code further.

The irony

Posted Nov 7, 2010 21:27 UTC (Sun) by rahulsundaram (subscriber, #21946) [Link] (1 responses)

The irony

Posted Nov 8, 2010 17:12 UTC (Mon) by dtlin (subscriber, #36537) [Link]

neatx is a wrapper for the 3.x NX core libraries, much like NoMachine's nxserver.

It does not support the NX 4.0 protocol, and never will, because nobody is working on it anymore and the libraries are not open.

The irony

Posted Nov 5, 2010 3:42 UTC (Fri) by davide.del.vento (guest, #59196) [Link] (8 responses)

I partially agree. Probably the people trying to "kill" X don't use, and don't need (and thus don't care about), network transparency the way you and I do.
Nevertheless, running remote X apps over the network can be painful. I speak from experience, because we do that. One day we had a dozen laptops, connecting wirelessly to a single router, running interactive X sessions on a system in the basement. It sucked.
I'm sure we could improve the X protocol and make it much faster, but I am not sure we can improve it enough (e.g. if a user clicks a button, that info must go to the server to decide what that button does, right? And latency is there to kill us...)
Now, completely killing network transparency doesn't solve the problem either, or does it? Besides, installing some exotic "high performance" stuff on our "basement machine" would be almost impossible... Or is Wayland transparent on the "basement machine", requiring only stuff on the laptops?

The irony

Posted Nov 5, 2010 4:17 UTC (Fri) by dlang (guest, #313) [Link] (2 responses)

The time it takes to make the network hop to the server over a local gig-E network is so small that it just doesn't matter. The message will travel faster than your system will time-slice with a jiffies setting of 100Hz.

Over high-latency links X performs poorly because it serializes everything, so you have a huge number of round trips. But since these are very standard messages that have the same answer for all applications, most of this data can be cached and replied to locally, eliminating the network latency. There's still the message-passing and parsing latency, and most of these messages could be combined to save that, while still keeping the network transparency in place.
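
A minimal sketch of that caching idea (hypothetical query API, not the real X wire format): a local proxy answers repeated, deterministic requests itself, so only the first occurrence pays a round trip.

    const replyCache = new Map<string, string>();

    async function query(
      key: string,
      askServer: (k: string) => Promise<string>,
    ): Promise<string> {
      const cached = replyCache.get(key);
      if (cached !== undefined) return cached; // answered locally, no latency
      const reply = await askServer(key);      // round trip, first time only
      replyCache.set(key, reply);              // e.g. atom names, font metrics
      return reply;
    }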

The irony

Posted Nov 5, 2010 13:30 UTC (Fri) by rgoates (guest, #3280) [Link] (1 responses)

The last time I checked (which has been several years), an interactive X session sends a network packet for each and every keystroke. A horrible waste of bandwidth, but I'm not sure how you can improve that without affecting interactivity (or significantly changing the client-server model X is built around).
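
One conceivable middle ground is to coalesce input for a few milliseconds and send it in a single packet; a hypothetical TypeScript sketch of the tradeoff:

    const pending: string[] = [];
    let flushTimer: ReturnType<typeof setTimeout> | null = null;

    function sendPacket(batch: string[]): void {
      console.log("one packet carrying", batch.length, "keystrokes");
    }

    function onKeystroke(key: string): void {
      pending.push(key);
      if (flushTimer === null) {
        flushTimer = setTimeout(() => { // adds at most ~10 ms of latency
          sendPacket(pending.splice(0));
          flushTimer = null;
        }, 10);
      }
    }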

The irony

Posted Nov 6, 2010 3:42 UTC (Sat) by mfedyk (guest, #55303) [Link]

Just like... ssh.

Please explain why NX hasn't become the wire protocol for X...

The irony

Posted Nov 5, 2010 7:32 UTC (Fri) by kevinm (guest, #69913) [Link]

> if a user clicks a button, that info must go to the server to decide what that button does, right? And latency is there to kill us...

Not necessarily. We can solve the problem in the same way that web applications do: provide a lightweight VM on the UI side, and allow the application to push small chunks of bytecode down to the UI to tell it how to respond to things like button-pushes.
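
A toy sketch of that idea (hypothetical wire format; a real system would sandbox the pushed code rather than compile it blindly):

    type Handler = (event: { target: string }) => void;
    const handlers = new Map<string, Handler>();

    // The server pushes a small handler once, as source text...
    function installHandler(widgetId: string, source: string): void {
      handlers.set(widgetId, new Function("event", source) as Handler);
    }
    installHandler("ok-button", `console.log("pressed", event.target);`);

    // ...and every later click is handled locally, with no round trip.
    handlers.get("ok-button")!({ target: "ok-button" });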

The irony

Posted Nov 5, 2010 16:03 UTC (Fri) by deepfire (guest, #26138) [Link] (3 responses)

Please forgive me for the third reference to NX in this thread, but I'm really, really surprised to see people discuss X issues without factoring it in.

So, there are these Nomachine people from Italy, and they seem to have done some pretty good open-source work on optimising the hell out of the X protocol implementation.

At least I use NX daily; my experience is very positive, and by now I consider it indispensable.

NX and others

Posted Nov 5, 2010 19:10 UTC (Fri) by boog (subscriber, #30882) [Link] (2 responses)

Has anybody gotten around to trying xpra?

http://code.google.com/p/partiwm/wiki/xpra

NX and others

Posted Nov 10, 2010 9:56 UTC (Wed) by nix (subscriber, #2304) [Link]

Yes, but its keyboard handling is bad enough that I use it only to host xemacses that I'm then going to connect to with gnuclient.

NX and others

Posted Nov 11, 2010 5:10 UTC (Thu) by njs (subscriber, #40338) [Link]

I use it all the time. As nix says, keyboard handling is its weakest point (esp. with non-US layouts), and I've seen some poorly coded apps flip out when the "virtualized" WM doesn't respond as quickly as they're assuming it will, but mostly it works well for me.

Disclaimer: I wrote it, so any show-stopper bugs affecting me *would* be fixed now, wouldn't they ;-).

The irony

Posted Nov 5, 2010 4:41 UTC (Fri) by jwb (guest, #15467) [Link] (2 responses)

A lot of people have tried to make all the drawing happen in the server, by having the client basically upload the view program into the server and having the server run it. One recent one was "Berlin", later renamed "Fresco", which of course failed, but it was a nice idea. Their real problem was that they relied on a graphics middleware called GGI, which was independent of the output target and therefore totally useless. GGI is completely forgotten at this stage, I think.

Another famous one of course was Display PostScript.

The irony

Posted Nov 5, 2010 13:14 UTC (Fri) by vonbrand (subscriber, #4458) [Link]

> Another famous one of course was Display PostScript.

Oh, you mean that junk that came with Suns, and made me compile plain X for them on arrival because anything using the display was unbearably slow?

The irony

Posted Nov 12, 2010 2:54 UTC (Fri) by jmorris42 (guest, #2203) [Link]

> Another famous one of course was Display PostScript.

And of course it lives on. It started at NeXT, evolved into Display PDF, and is still around in the end product of NeXTSTEP, now known as Apple's OS X. If they can separate the display rendering from the application, I really don't understand why this argument is even taking place. If it is possible, the argument should be focused on identifying the limitations in X that are preventing a similarly performing system, and how they might be corrected.

The irony

Posted Nov 5, 2010 13:16 UTC (Fri) by alankila (guest, #47141) [Link] (1 responses)

The reason streaming video over YouTube works is that it's a nice, compressed, noninteractive bytestream. The fact that it gets displayed on screen and is actually a video is pretty much irrelevant from the viewpoint of the server, which just sends data.

The JavaScript RPC model is actually pretty interesting. The web is evolving to allow the programmer the freedom to select a suitable boundary between client and server. Sometimes you just want to dump data frames from the server as a stream (similar to VNC in the browser; it's doable); sometimes you run almost the entire application on the client and only send data to the server so that it can save the work, which really occurred almost completely on the client. In fact, in the most extreme design the server just feeds the original UI files, and afterwards no more interaction with the server occurs.
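
Two ends of that spectrum, as a hypothetical TypeScript sketch (the /api endpoints are made up):

    // Thin client: the server does the work.
    async function totalThin(items: number[]): Promise<number> {
      const r = await fetch("/api/sum", {
        method: "POST",
        body: JSON.stringify(items),
      });
      return (await r.json()).total;
    }

    // Thick client: compute locally, only persist the outcome.
    async function totalThick(items: number[]): Promise<number> {
      const total = items.reduce((a, b) => a + b, 0);
      await fetch("/api/save", { method: "POST", body: JSON.stringify({ total }) });
      return total;
    }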

Your comment also seems to be missing the point that Linux webapps still do run over X. The browser is an X client. It worries me that you seem to jumble everything together here.

The irony

Posted Nov 5, 2010 15:53 UTC (Fri) by lacostej (guest, #2760) [Link]

> it's a nice, compressed, noninteractive bytestream

and buffered on the client side!

