Posted Nov 7, 2010 19:20 UTC (Sun) by iabervon (subscriber, #722)
Parent article: LPC: Life after X
Since this comes up during the "Lessons from Unix" series, it seems to me that the obvious right solution, now that we've got the hardware for it, is to have applications open a pseudo-terminal device that provides input events and a frame buffer supporting various elaborate operations. But these aren't hardware devices any more than pts/1 is: the input events are only those directed at the application that opened the pseudo-terminal, the frame buffer is only the application's window, and the kernel is providing an abstraction layer and hiding what is on the other side. It may be that a userspace program is compositing the windows onto the hardware frame buffer; then again, the windows might be hardware texture maps, and a userspace program has simply arranged the hardware's scene graph so that those textures get rendered. Or maybe the framebuffer device is proxying everything over a network connection.
But I think the important magic is really having OpenGL available as system calls on a file descriptor for a graphical pts device. Even though OpenGL in the kernel is a terrible idea, OpenGL over a channel that the kernel is responsible for is a great idea, and it's even better if the "device" side of the channel can tell the hardware driver to snoop the channel and handle directly whatever the hardware supports.