LPC: Life after X

Posted Nov 7, 2010 14:16 UTC (Sun) by drag (guest, #31333)
In reply to: LPC: Life after X by alankila
Parent article: LPC: Life after X

Possibly. There are other options.

Spice, right now, works at the 'hardware level' in a virtualized environment. You install paravirtualized drivers in a Windows or Linux KVM guest and they work with the Qemu software to provide very effective remote access. I am guessing that similar things would be possible by just running a special virtual driver on a non-virtualized host, like how you can have virtual audio cards for network or Bluetooth audio and things like that, but just for remote access.

Or you can just do screen capture. There is now, amazingly, very good screen recording software for Linux. You know, for making demos or little youtube videos on how awesome Compiz or whatever. It is quite able to run a 1440x900 15 FPS screen capture on very modest hardware without breaking a sweat, with almost no noticeable impact. Of course it uses a recording method that is very inefficient in terms of compression; an optimized form of WebM or MJPEG or something would probably be better. You can capture individual applications as well. What you need then is just some sort of proxy that is capable of sending your keyboard and pointer inputs back to the original application. You should be able to handle individual applications that way too.
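
To make that concrete, here is a rough sketch of the capture half of that idea, assuming ffmpeg with its x11grab input and the libvpx (WebM/VP8) encoder are available; the display name and output filename are placeholders, and the input-proxy half (injecting keyboard and pointer events on the far side) is not shown:

    # Hedged sketch: record an X display with ffmpeg's x11grab input and
    # encode it as WebM (VP8).  The display name and output file are
    # placeholders; geometry and frame rate match the numbers above.
    import subprocess

    def capture_screen(display=":0.0", size="1440x900", fps=15,
                       output="capture.webm"):
        """Record the given X display to a WebM file until interrupted."""
        cmd = [
            "ffmpeg",
            "-f", "x11grab",         # X11 screen-grab input device
            "-video_size", size,     # capture geometry
            "-framerate", str(fps),  # modest frame rate
            "-i", display,           # which display/screen to read
            "-c:v", "libvpx",        # VP8, the WebM video codec
            output,
        ]
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        capture_screen()

Pointing something like this at a single window rather than the whole root window, and pairing it with a channel that sends input events back, would give the per-application remote access described above.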

So there are a few different ways.

I'd really hate to have it handled at the toolkit level. I know that would probably be the cleanest approach, but I dislike it just because the chance of multiple toolkits doing it in a correct and consistent manner is just about nil.



LPC: Life after X

Posted Nov 7, 2010 21:29 UTC (Sun) by zander76 (guest, #6889)

It certainly opens up a lot of options. As he points out in the article, you could literally drive the networking with anything. Take a step back from the original concept of how X does its networking and things get really interesting.

As much as I hate this, I will describe it anyway. You could drive an application with a web service, with HTML and JSON driving the application. The sad part is that with browser caching this could actually be somewhat efficient.
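
As a hedged illustration only (the endpoint name and JSON layout are invented for the example), that idea might boil down to a tiny HTTP server that publishes its UI state as JSON and lets a browser-side page render it and post input events back:

    # Hypothetical sketch: an application exposing its UI as JSON over HTTP.
    # A browser-side page (not shown) would render this and POST events back.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    UI_STATE = {"title": "demo", "widgets": [{"type": "button", "label": "OK"}]}

    class UIHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/ui":                       # made-up endpoint
                body = json.dumps(UI_STATE).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                # Cache headers are where the "somewhat efficient" part comes in.
                self.send_header("Cache-Control", "max-age=60")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), UIHandler).serve_forever()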

Breaking it apart would certainly put a lot more load on the distributor when choosing the pieces to put together, but hey, they make lots of money anyway :) (joke)

Ben

LPC: Life after X

Posted Nov 7, 2010 21:51 UTC (Sun) by dlang (guest, #313)

One funny thing is that even if you limit yourself to running graphics locally, you should still design your graphics system with similar concerns in mind. It's not efficient to have your main CPU do all the work and pass bitmaps to the video card; besides not taking advantage of the GPU at all, the bus between the CPU and the video card becomes the bottleneck. As a result, even for local graphics, the main CPU should be describing what it wants displayed and sending that description to the video card rather than just sending the image to be displayed.

Moving the video card to another machine (potentially in another office) changes the speed of this interconnect and adds latency to the connection, but the fundamental problem, that you have a low-bandwidth link between the CPU and the display, is there no matter what.

In terms of bandwidth, the network connection between two machines in the same office is faster than the connection from the CPU to the video card was a few years ago (PCI vs Gig-E).
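
For a rough sense of the numbers (nominal peak rates only, not measured throughput): classic 32-bit/33 MHz PCI tops out around 133 MB/s, gigabit Ethernet at about 125 MB/s, and an uncompressed 1440x900, 32-bit stream at 15 frames per second already eats most of either link:

    # Back-of-the-envelope figures behind the PCI vs Gig-E comparison.
    pci_bw  = 133e6      # classic 32-bit/33 MHz PCI: ~133 MB/s peak
    gige_bw = 1e9 / 8    # gigabit Ethernet: ~125 MB/s peak

    # Uncompressed 1440x900, 32 bits per pixel, 15 frames per second:
    frame_bytes = 1440 * 900 * 4
    stream_bw   = frame_bytes * 15

    print("PCI peak:         %3.0f MB/s" % (pci_bw / 1e6))
    print("Gig-E peak:       %3.0f MB/s" % (gige_bw / 1e6))
    print("Raw 15fps stream: %3.0f MB/s" % (stream_bw / 1e6))   # ~78 MB/s

Which is exactly why sending a description of what to draw, rather than the pixels themselves, matters on both kinds of link.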

LPC: Life after X

Posted Nov 8, 2010 2:31 UTC (Mon) by drag (guest, #31333)

> As a result, even for local graphics, the main CPU should be describing what it wants displayed and sending that description to the video card rather than just sending the image to be displayed.

This, as a side note, is also why, even though benchmarks of graphical toolkits often show X's software rendering ending up faster than the hardware-accelerated version, it's still advantageous to use GPU rendering IF you can do the majority of the rendering on the GPU. Certain toolkit microbenchmarks will often show that CPU rendering is faster at some things than GPU rendering; the GPU just sucks at certain things. But ideally you want to use the GPU 100% of the time to avoid multiple trips over the PCI Express bus. Each time you have to send texture data across the bus you're burning hundreds of thousands of GPU and CPU cycles just waiting for the data to be pushed over.
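
A quick, hedged calculation (the bus rate and clock speed are assumptions, and it ignores latency and synchronization stalls, which only make things worse) shows where an estimate like "hundreds of thousands of cycles" comes from:

    # Time to push one 512x512 RGBA texture (1 MiB) across the bus,
    # expressed in CPU cycles.  Bus bandwidth and clock rate are assumed.
    texture_bytes = 512 * 512 * 4      # 1 MiB
    bus_bw        = 4e9                # ~4 GB/s, PCI Express 1.x x16 ballpark
    cpu_hz        = 3e9                # a 3 GHz core

    transfer_s = texture_bytes / bus_bw
    print("transfer time:        %.0f us" % (transfer_s * 1e6))     # ~262 us
    print("cycles spent waiting: %.0f" % (transfer_s * cpu_hz))     # ~786,000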

You can imagine the huge penalty you pay if you do, say, text rendering in software but the rest on the GPU. Even if the GPU rendering were a dozen times slower, in the real world the GPU would still win.

Luckily AMD and Intel are trying to simplify things quite a bit by putting CPUs and GPUs on the same hunk of silicon. No need to flush textures back and forth if you're sharing the same memory. :) But even then, making proper use of the GPU in software will yield huge improvements in efficiency and performance.

LPC: Life after X

Posted Nov 8, 2010 14:02 UTC (Mon) by nix (subscriber, #2304)

This is just a re-expression of a problem graphics developers have had for fifteen years or more, ever since video cards started getting dedicated RAM with relatively high access latencies from the CPU: you can store stuff in VRAM, where it's really fast to manipulate with the GPU and to display, or you can store it in main memory and bash on it with the CPU, where it's much slower to display, but if you mix the two, you get incredible sloth. Back when the offscreen pixmap cache didn't have a defragmenter (pre X.org 1.6.0) it tended to get too fragmented to hold any useful amount of text... and text scrolling, on a Radeon 9250, then took several *seconds* per screen. That was entirely because of repeated CPU<->VRAM data transfers. They're *slow*.

(Of course, KMS should fix all of this by giving us a proper memory manager for VRAM-plus-main-memory. It doesn't seem to be there yet, though: I still see occasional scrolling slowdowns when the pixmap cache gets too fragmented and the defragmenter hasn't kicked in yet.)

