LCA: The X-men speak
Linux.conf.au 2013 in Canberra provided an interesting window into the world of display server development with a pair of talks about the X Window System and one about its planned successor Wayland (a talk which will be the subject of its own article shortly). First, Keith Packard discussed coming improvements to compositing and rendering. He was followed by David Airlie, who talked about recent changes and upcoming new features for the Resize, Rotate and Reflect Extension (RandR), particularly to cope with multiple-GPU laptops. Each talk was entertaining enough in its own right, but they worked even better together as the speakers interjected their own comments into one another's Q&A period (or, from time to time, during the talks themselves).
Capacitance: sworn enemy of the X server
Packard kicked things off by framing recent work on the X server as
a battle against capacitance—more specifically, the excess power
consumption that adds up every time there is an extra copy operation
that could be avoided. Compositing application window contents and
window manager decorations together is the initial capacitance sink,
he said, since historically it required either copying an
application's content from one scanout buffer to another, or
repainting an entirely new buffer then doing a page-flip between the
back (off-screen) buffer and the front (on-screen) buffer. Either
option requires significant memory manipulation, which has steered the
direction of subsequent development, including DRI2, the rendering
infrastructure currently used by the X server.
But DRI2 has its share of other problems needing attention, he
said. For example, the Graphics Execution Manager (GEM) assigns its
own internal names called global GEM handles to the graphics memory it
allocates. These handles are simply integers, not references to
objects (such as file descriptors) that the kernel can
manage. Consequently, the kernel does not know which applications are
using any particular handle; it instead relies on every application to
"remember to forget the name
" of each handle when it is
finished with it. But if one application discards the handle while
another application still thinks it is in use, the second application
will suddenly get whatever random data happens to get placed in the
graphics memory next—presumably by some unrelated application.
GEM handles have other drawbacks, including the fact that they bypass
the normal kernel security mechanisms (in fact, since the handles are
simple integers, they are hypothetically guessable). They are also
specific to GEM, rather than using general kernel infrastructure like
DMA-BUFs.
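For readers who want to see what the DMA-BUF alternative looks like in practice, here is a minimal sketch (mine, not from the talk) of exporting a buffer as a file descriptor with libdrm's PRIME helper; the export_buffer() wrapper is hypothetical, while drmPrimeHandleToFD() is the actual libdrm entry point:

    /* Export a GEM buffer as a dma-buf file descriptor. Unlike a
     * global GEM name, the fd is a kernel-managed reference: it can
     * be passed to another process over a UNIX socket, and the buffer
     * stays alive exactly as long as someone holds a reference. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <xf86drm.h>

    int export_buffer(int drm_fd, uint32_t gem_handle)
    {
        int prime_fd = -1;

        if (drmPrimeHandleToFD(drm_fd, gem_handle,
                               DRM_CLOEXEC, &prime_fd) < 0)
            return -1;    /* export failed */
        return prime_fd;  /* suitable for passing via SCM_RIGHTS */
    }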
DRI2 also relies on the X server to allocate all buffers, so applications must first request an allocation, then wait for the X server to return one. The extra round trip is a problem on its own, but server allocation of buffers also breaks resizing windows, since the X server immediately allocates a new, empty back buffer. The application does not find out about the new allocation until it receives and processes the (asynchronous) event message from the server, however, so whatever frame the application was drawing can simply get lost.
The plan is to fix these problems in DRI2's successor, which Packard referred to in slides as "DRI3000" because, he said, it sounded futuristic. This DRI framework will allow clients, not the X server, to allocate buffers, will use DMA-BUF objects instead of global GEM handles, and will incorporate several strategies to reduce the number of copy operations. For example, as long as the client application is allocating its own buffer, it can allocate a little excess space around the edges so that the window manager can draw window decorations around the outside. Since most of the time the window decorations are not animated, they can be reused from one frame to the next. Compositing the window and decoration will thus be faster than in the current model, which copies the application content on every frame just to draw the window decorations around it. Under the new scheme, if the client knows that the application state has not changed, it does not need to trigger a buffer swap.
Moving buffer management out of the X server and into the client has other benefits as well. Since the clients allocate the buffers they use, they can also assign stable names to the buffers (rather than the global GEM handles currently assigned by the server), and they can be smarter about reusing those buffers—such as by tracking the age of each buffer's contents through the EGL_buffer_age extension. If the X server has just performed a swap, it can report back that the previous front buffer is now idle and available. But if the server has just performed a blit (copying only a small region of updated pixels), it could report back that the just-used back buffer is idle instead.
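As an illustration of how a client might act on buffer age (a sketch under my own assumptions, not code from the talk), the EGL_EXT_buffer_age extension lets a client ask how many swaps ago a buffer's contents were current and repaint only what has changed since then; the two repaint helpers are hypothetical placeholders:

    /* Limit repainting using EGL_EXT_buffer_age. Assumes an
     * initialized EGLDisplay/EGLSurface with a current context;
     * error handling omitted. */
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    void repaint_everything(void);          /* hypothetical helpers */
    void repaint_damage_since(EGLint age);

    void draw_frame(EGLDisplay dpy, EGLSurface surface)
    {
        EGLint age = 0;

        /* Age 0: contents undefined (e.g. a brand-new buffer).
         * Age N: the buffer holds the frame from N swaps ago. */
        eglQuerySurface(dpy, surface, EGL_BUFFER_AGE_EXT, &age);

        if (age == 0)
            repaint_everything();
        else
            repaint_damage_since(age);

        eglSwapBuffers(dpy, surface);
    }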
There are also copy operations to be trimmed out in other ways,
such as by aligning windows with GPU memory page boundaries. This
trick is currently only doable on Intel graphics hardware, Packard
said, but gets about 50% of the way from the status quo to the
hypothetical upper limit. He already has much of the
DRI-replacement work functioning ("at least on my machine") and is
targeting X server 1.15 for its release. The
page-swapping tricks are not as close to completion; a new kernel
ioctl() has been written to allow exchanging chunks of GPU
pages, but the page-alignment code is not yet implemented.
New tricks for new hardware
Airlie's talk focused more on supporting multiple displays and
multiple graphics cards. This was not an issue in the early days, of
course, when the typical system had one graphics card tied to one
display; a single Screen (as defined by the X Protocol) was
sufficient. The next step up was simply to run a separate Screen for
the second graphics card and the second display—although, on the
downside, running two separate Screens meant it was not possible to
move windows from one display to the other. A similar setup was "Zaphod"
mode, a configuration in which one graphics card was used to drive two
displays on two separate Screens. The trick was that Zaphod mode used
two copies of the GPU driver, with one attached to each screen. Here
again, two Screens meant that it was not possible to move windows
between displays.
Things started getting more interesting with Xinerama, however. Xinerama mode introduced a "fake" Screen wrapped around the two real Screens. Although this approach allowed users to move windows between their displays, it did this at the high cost of keeping two copies of every window and pixmap, one for each real Screen. The fake Screen approach had other weaknesses, such as the fact that it maintained a strict mapping to objects on the real, internal Screens—which made hot-plugging (in which the real objects might appear and disappear instantly) impossible.
Thankfully, he said, RandR 1.2 changed this, giving us for the first time the ability to drive two displays with one graphics card, using one Screen. "It was like ... sanity", he concluded, giving people what they had long wanted for multiple-monitor setups (including temporarily connecting an external projector for presentations). But the sanity did not last, he continued, because vendors started manufacturing new hardware that made his life difficult. First, multi-seat/multi-head systems came out of the woodwork, such as USB-to-HDMI dongles and laptop docking stations. Second, laptops began appearing with multiple GPUs, which "come in every possible way to mess up having two GPUs". He has one laptop, for example, which has the display-detection lines connected to both GPUs ... even though only one of the GPUs could actually output video to the connected display.
RandR 1.4 solves hot-plugging of displays, work which required adding support for udev, USB, and other standard kernel interfaces to the X server, which up until then had used its own methods for bus probing and other key features. RandR 1.4's approach to USB hotplugging works by having the main GPU render everything, then having the USB GPU simply copy the buffers out, performing its own compression or other tricks to display the content on the USB-attached display. RandR also allows the X server to offload drawing of part of the screen (such as a game) to the other GPU, displaying it in one window on the main display.
The future of RandR includes several new functions, such as "simple" GPU switching. This is the relatively straightforward-sounding action of switching display rendering from one GPU to the other. Some laptops have a hardware switch for this function, he said, while others do it in software. Another new feature is what Airlie calls "Shatter," which splits up the rendering of a single screen between multiple GPUs.
Airlie said he has considered several approaches to getting to this future, but at the moment Shatter seems to require adding a layer of abstraction he called an "impedance" layer between X server objects and GPU objects. The impedance layer tracks protocol objects and damage events and converts them into GPU objects. "It's quite messy", he said, describing the impedance layer as a combination of the X server's old Composite Wrapper layer and the Xinerama layer "munged" together. Nevertheless, he said, it is preferable to the other approach he explored, which would rely on pushing X protocol objects down to the GPU layer. At the moment, he said, he has gotten the impedance layer to work, but there are some practical problems, including the fact that so few people do X development that there are only one or two people who would be qualified to review the work. He is likely to take some time off to try and write a test suite to aid further development.
Marking the spot
It is sometimes tempting to think of X as a crusty old relic—and, indeed, both Packard and Airlie poked fun at the display server system and its quirks more than once. But what both talks made clear was that even if the core protocol is to be replaced, that does not reduce window management, compositing, or rendering to a trivial problem. The constantly changing landscape of graphics hardware and the ever-increasing expectations of users will certainly see to that.
Index entries for this article:
Conference: linux.conf.au/2013
Yay for LWN
Posted Feb 11, 2013 18:54 UTC (Mon)
by ovitters (guest, #27950)
[Link]
Posted Feb 11, 2013 21:04 UTC (Mon)
by quintela (guest, #34198)
[Link] (10 responses)
How difficult would be to implement that?
My setup is like that, and it just happens that I always end up needing the different "halves" of two virtual desktops together.
Posted Feb 12, 2013 2:05 UTC (Tue)
by sjj (guest, #2020)
[Link]
I'd love to have this too!
Posted Feb 12, 2013 6:25 UTC (Tue)
by geofft (subscriber, #59789)
[Link]
Posted Feb 12, 2013 7:29 UTC (Tue)
by sce (subscriber, #65433)
[Link]
Posted Feb 12, 2013 19:18 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link]
Here, I have 3 monitors with 9 workspaces. Each monitor shows a single workspace (though a workspace can't be on two monitors at the same time; maybe Wayland will fix that problem :) ). Unfortunately, the keybindings for manipulating a setup like this can get a little crazy. Comments from my xmonad.hs:
-------------------------------------------------------------------
-- ctrl+F[1 .. ], Switch to workspace N
-- ctrl+shift+F[1 .. ], View to workspace N
-- meta+F[1 .. ], Move client to workspace N and follow
-- meta+shift+F[1 .. ], Move client to workspace N
-- alt+F[1 .. ], Swap with workspace N and follow
-- alt+shift+F[1 .. ], Swap with workspace N
-------------------------------------------------------------------
-- ctrl+meta+F[1 .. ], Switch to screen N
-- ctrl+meta+shift+F[1 .. ], Move client to screen N
-- alt+meta+F[1 .. ], Swap with screen N and follow
-- alt+meta+shift+F[1 .. ], Swap with screen N
-------------------------------------------------------------------
-- ctrl+alt+[left,right], Switch to workspace to the left or right
-- meta+[left,right], Move window to left or right and follow
-- meta+shift+[left,right], Move window to left or right
-- alt+meta+[left,right], Swap with workspace to left or right and follow
-- alt+meta+shift+[left,right], Swap with workspace to left or right
-------------------------------------------------------------------
-- ctrl+alt+[up,down], Switch to next/previous screen
-- meta+[up,down], Move window to next/previous screen and follow
-- meta+shift+[up,down], Move window to next/previous screen
-- alt+meta+[up,down], Swap with next/previous screen and follow
-- alt+meta+shift+[up,down], Swap with next/previous screen
-------------------------------------------------------------------
I don't use all of them (the Fn indexing for screens and the swapping workspaces between screen don't see much use), but it's nice to have the flexibility to manipulate windows and workspaces with the keyboard.
Posted Feb 22, 2013 11:30 UTC (Fri)
by turol (guest, #29306)
[Link] (5 responses)
I have virtual desktops numbered 1-6. With one monitor I can switch the currently active desktop with either the (MATE desktop) selector widget or keyboard shortcuts.
Now assume I had two monitors: left and right. Currently the virtual desktops fill both monitors and switching changes both monitors.
Instead I'd like all the virtual desktops to be the size of ONE screen. Then both screens could be switched independently. I could have virtual desktop 1 on the left and 2 or 3 on the right.
The mouse (and keyboard focus) would be on one monitor at a time, and when I pressed a hotkey to change desktops, the monitor where the mouse currently is would change.
Both monitors would have a task bar with all the same widgets and menus, but the task bar would contain only the programs on the current virtual desktop.
Fullscreening an app should make it fill one virtual desktop and starting a fullscreen OpenGL program should only fill one monitor.
As far as I know all of this is currently not possible.
Posted Feb 22, 2013 11:45 UTC (Fri)
by tnoo (subscriber, #20427)
[Link] (4 responses)
Posted Feb 22, 2013 14:12 UTC (Fri)
by turol (guest, #29306)
[Link] (3 responses)
Can awesome and xmonad do that?
Posted Feb 22, 2013 14:38 UTC (Fri)
by rahulsundaram (subscriber, #21946)
[Link] (2 responses)
Posted Feb 22, 2013 14:50 UTC (Fri)
by turol (guest, #29306)
[Link] (1 responses)
Posted Feb 23, 2013 5:01 UTC (Sat)
by lsl (subscriber, #86508)
[Link]
Posted Feb 11, 2013 21:26 UTC (Mon)
by atai (subscriber, #10977)
[Link] (33 responses)
Posted Feb 11, 2013 22:18 UTC (Mon)
by jospoortvliet (guest, #33164)
[Link] (32 responses)
Posted Feb 11, 2013 22:55 UTC (Mon)
by pjhacnau (subscriber, #4223)
[Link] (31 responses)
Posted Feb 11, 2013 23:40 UTC (Mon)
by neilbrown (subscriber, #359)
[Link] (1 responses)
Wayland isn't "just" a compositor, though that is a lot of what is does.
But you are right in that wayland is just part of a complete solution. It is just a specification (I think).
Posted Feb 12, 2013 17:58 UTC (Tue)
by iabervon (subscriber, #722)
[Link]
Posted Feb 12, 2013 8:22 UTC (Tue)
by rossburton (subscriber, #7254)
[Link] (7 responses)
Posted Feb 13, 2013 0:06 UTC (Wed)
by daglwn (guest, #65432)
[Link] (1 responses)
Posted Feb 14, 2013 8:30 UTC (Thu)
by Serge (guest, #84957)
[Link]
You can see it for yourself. There are Wayland live CDs. Download http://sourceforge.net/projects/rebeccablackos/ , burn it to a DVD or manually unpack it to a bootable USB stick, and try it.
Posted Feb 14, 2013 6:21 UTC (Thu)
by Serge (guest, #84957)
[Link] (4 responses)
The DirectFB people never attempted to take the place of X.Org. They could have, they just never tried.
Posted Feb 14, 2013 13:56 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
1) It uses Linux FB support for modesetting. So no multiple monitors, rotation, etc.
Posted Feb 14, 2013 19:15 UTC (Thu)
by Serge (guest, #84957)
[Link] (1 responses)
Maybe. :) But it's not that obvious.
> DirectFB is a very lightweight X-server look-alike moved into the kernel.
It does not require kernel patches. It only builds a fusion kernel module for fast IPC. And it's much farther from X-server than Wayland.
> It has no real support for running OpenGL applications
DirectFBGL? You can see some 3D-looking samples in http://ilixi.org/ video.
> and can only do simple compositions.
Weston can also do simple compositions. It's up to the compositor, I guess.
Posted Feb 14, 2013 19:25 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link]
I don't remember it ever trying to support OpenGL, but I guess that blitting from a texture to framebuffer is not that complex to add if you have a working EGL.
In short, DirectFB is more like a classic framebuffer with several hardware accelerated operations. It's not nearly enough for modern graphics.
Posted Feb 14, 2013 15:14 UTC (Thu)
by renox (guest, #23785)
[Link]
Probably because DirectFB's documentation and introductory materials suck so much that I've never been able to understand how it works (I'm probably not the only one, considering that all the discussions about DirectFB I've seen go "it works like this, no it works like that, no, etc.").
> DirectFB people never attempted taking the place of X.Org. They could, just never tried.
You don't get to be successful by not trying.
Posted Feb 14, 2013 5:22 UTC (Thu)
by Serge (guest, #84957)
[Link] (17 responses)
Well, Wayland is a protocol. The application implementing the server side of this protocol is a compositor. The current reference compositor is called Weston.
You can think of a wayland compositor as X.Org and Compiz and Emerald and Gnome-Panel and XScreenSaver and a few more things all in a single application.
The wayland protocol is simple. It does not allow you to do most of the things you use daily. I.e. you can't have a dockbar or system tray, can't manage multiple monitors, can't even have a password-protected screensaver. Since none of these are supported by the protocol they'll have to be done by the compositor.
If wayland becomes popular you'll end up choosing between a compositor with a nice taskbar, a compositor with multiple-monitor support, and a compositor with a useful window manager.
Posted Feb 14, 2013 5:30 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (7 responses)
In reality, screensavers are not that big a deal. Making them pluggable is easy.
Posted Feb 14, 2013 6:10 UTC (Thu)
by Serge (guest, #84957)
[Link] (6 responses)
Sure. And that's the point. Wayland is advertised as a simple replacement for X.Org that is supposed to be smaller, faster and easier to use. But currently it's smaller than X.Org for the same reasons as Paint is smaller than Photoshop — it lacks many features. If you add all the missing features into the wayland protocol it will grow as large as X11 is now.
Turning those features into plugins won't help much either. It's just that instead of a large and complex wayland protocol you'll get a simple protocol together with a large and complex weston plugin API.
What's the point of wayland then? Reinventing X.Org from scratch?
Posted Feb 14, 2013 8:15 UTC (Thu)
by khim (subscriber, #9252)
[Link] (5 responses)
Which can be happily ignored by most applications. The bits which are needed by applications (things like notification popups or tray icons) are not supported by X anyway and are implemented separately even today.
Posted Feb 14, 2013 9:35 UTC (Thu)
by Serge (guest, #84957)
[Link] (4 responses)
They ARE supported by X. That's the beauty of X. You don't have to patch X.Org to get a system tray. You just need a tray host, and you can use regular X11 messages and EWMH to make it work; X11 is flexible enough for that.
Wayland is not that nice. To get a system tray in wayland you first need to reinvent your own protocol, which will be used by all the applications to create tray icons. Then you need a patch (or a plugin) for the wayland compositor and library. And you'll have to update that patch/plugin with every new compositor/library release. You cannot implement it separately.
Posted Feb 14, 2013 13:58 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (3 responses)
Big difference, I know.
Posted Feb 14, 2013 16:51 UTC (Thu)
by Serge (guest, #84957)
[Link] (2 responses)
In X.Org you just start your tray or taskbar host and it works. If you don't like it, you close it, open another one, and it works. No need to patch X.Org, no need to restart other apps. You can even have multiple taskbars, if you wish.
Wayland compositor IS the host. And there're no wayland messages to exchange. To have your tray you must write a patch/plugin for the compositor, and add your new messages as wayland protocol extension, then you will get the tray. And if you don't like it, you have to kill this compositor, with all the apps running there, and build another compositor with another tray patch.
Imagine that you have to patch and rebuild X.Org every time you want to look at Gnome/KDE/XFCE/LXDE/IceWM/FluxBox/OpenBox/etc. That's what you're going to get with wayland. That's the difference.
Posted Feb 14, 2013 16:58 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link]
> Wayland compositor IS the host. And there're no wayland messages to exchange. To have your tray you must write a patch/plugin for the compositor, and add your new messages as wayland protocol extension, then you will get the tray.
> And if you don't like it, you have to kill this compositor, with all the apps running there, and build another compositor with another tray patch.
There is no standard protocol for this right now, but it's just a question of finding a consensus and making a standard, not some fundamental deficiency.
Also, X's tray protocol is woefully incomplete. So KDE and GNOME now use DBUS instead of X for communication.
Posted Feb 14, 2013 17:03 UTC (Thu)
by raven667 (subscriber, #5198)
[Link]
What?
I don't think this is how things work. Wayland is just the protocol, your compositor/WM has all the logic in it just the same as it is today. If your WM wants some window to be special such as a taskbar or whatever then that's its prerogative. There will be many different window managers surely, just as there is today, I'm not sure what this talk of patching and rebuilding has to do with anything...
Posted Feb 14, 2013 13:37 UTC (Thu)
by daniels (subscriber, #16193)
[Link] (7 responses)
Weston is totally pluggable, so you can implement any shell, UX or policy you'd like on top of that, whilst reusing all its low-level infrastructure for policy, multiple monitor support, screensavers, input, etc, etc.
Posted Feb 14, 2013 17:06 UTC (Thu)
by Serge (guest, #84957)
[Link] (6 responses)
They're not really supported, just partially exposed. To configure them they must be supported by the compositor. Userspace tools can no longer switch/move/rotate monitors. For X.Org you can use the xrandr tool anywhere. For Wayland you'll have to use a compositor-specific tool, if it exists. And you can't write a presentation tool that will automatically rotate the monitor to portrait orientation and back, unless you make it part of the compositor.
> The other bits you've called out have no effect on clients, so they aren't supported in the client-server protocol.
"Client" definition in Wayland is different from X.Org. For X.Org all visible apps on the screen are X.Org clients. And most of them could never be Wayland clients. Under Wayland same apps have to be integrated with compositor.
> Weston is totally pluggable
As long as there's no stable Weston API for dynamically loadable plugins, "weston plugin" won't be much different from "weston patch".
Posted Feb 14, 2013 17:16 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Feb 14, 2013 17:16 UTC (Thu)
by daniels (subscriber, #16193)
[Link] (3 responses)
An X server has to support multiple monitors for X clients to be able to use them, too. Which wasn't the case for a very long time. Also, bear in mind that XRandR 1.2 is very recent - I think it landed in about 2007 or 2008? (And that massive massive chunks of how X works today, including multiple workspaces, etc, are not in any way part of the protocol, and require the compositor to participate in a number of _extremely_ complex protocols. But never mind that.)
> And you can't write a presentation tool that will automatically rotate monitor to portrait orientation and back, unless you make it part of the compositor.
We'd rather have the client tell the compositor about the orientation change, and let the compositor handle it in the most optimal way: either have the display rotated (which is only possible in a vanishingly small amount of hardware, and I do mean vanishing in the most literal way) by the hardware, or have the compositor do the rotation when it does the final blit. X usually emulates this with one extra blit from a shadow buffer, because it has no way to tell the compositor to do it. This has a very real power and performance cost.
Oh, and when your presentation tool crashes, one of your monitors isn't inexplicably left rotated. Also when anything else pops up on that monitor, it isn't inexplicably rotated because the monitor itself isn't actually rotated.
Ditto resolution changes: instead of having games change your resolution and then forget to change it back, with all your desktop icons moving around too, we have the games flag their window as fullscreen, and the compositor works out what to do - be it changing the mode or doing the scaling, or just centring the window if the client asks.
Again, instead of providing direct literal functionality equivalents, we've taken a step back, looked at the problem these interfaces were trying to solve, and tried to come up with a better way.
> "Client" definition in Wayland is different from X.Org. For X.Org all visible apps on the screen are X.Org clients. And most of them could never be Wayland clients. Under Wayland same apps have to be integrated with compositor.
Technically speaking, they (desktop, panels, screensavers) are actually Wayland clients; they're just using special private interfaces the compositor doesn't expose to arbitrary clients. They're not running in-process with the compositor.
> As long as there's no stable Weston API for dynamically loadable plugins, "weston plugin" won't be much different from "weston patch".
You might have more of a point if this actually changed frequently. The X server changes its driver API far more often than Weston does its shell API, and yet no-one complains (except driver authors).
Posted Feb 15, 2013 17:57 UTC (Fri)
by mgedmin (subscriber, #34497)
[Link] (1 responses)
And this is why I can't wait to have Wayland on my laptop!
Posted Feb 15, 2013 18:37 UTC (Fri)
by Serge (guest, #84957)
[Link]
You're free to try: http://sourceforge.net/projects/rebeccablackos/
Posted Feb 21, 2013 9:47 UTC (Thu)
by mmarq (guest, #2332)
[Link]
lol... by the tick pace that the IT clock usually goes, that is as old as my grandpa.
_extremely_complex ? ... can't think exactly why... matter of fact to work really well with modern GPU hardware, not crap or obsolete stuff posing as modern... the ideas that goes my mind is exactly making it _nec_plus_ultra_multi-threading_complex...
There are reasons why some have success while others don't... why some can while others don't... perhaps what the Linux graphics/display is in need, is of code "Ninjas"... first and before of thinking of any new mechanism...
Posted Feb 14, 2013 17:47 UTC (Thu)
by raven667 (subscriber, #5198)
[Link]
Isn't the compositor a userspace tool? Can't it expose an interface for a command line tool if desired, maybe even have that interface standardized (such as via D-BUS). Doesn't the window manager/compositor have to participate at a fairly low level for R-and-R events anyway? I don't see this as something that's going to be a big change. The window manager/compositor already does mediate pretty much everything, with the existing X system being a giant wart on the side to be worked around.
Posted Feb 21, 2013 9:19 UTC (Thu)
by mmarq (guest, #2332)
[Link]
But what is REALLY REALLY needed is a *multi-threading protocol*... because the future is Multi-GPU or Multi-Core or Multi "compute unit" GPUs, that could be addressed exactly as CPUs are today. And this even for Ultra-Mobile GPUs of the phone/tablet ARM world.
I don't know why devs keep complaining hardware changes a lot... the writing has been on the wall for some(long) time now.
At least *Shatter* tries to address this... which "a priori" makes it already better than Wayland without any further issue.
Crazy gits on windows world, have been having boxes with even 4 GPU cards, while in Linux you got to be content to have one that works well... which is not often, and not with all GPUs... and worst of all, usually things tend to break from one version to the next of the all stack, from drivers to the WS, even if you keep the same card... or more frequently if you change for another, even if it is one of the same family and vendor.
Wayland solves nothing of this.
Only a more deep and direct involvement of the vendors could... but i'm afraid Linux is being set to be the battleground for pesky politics of hardware wars...
...screw the user... who pays the piper chooses the tune... so its my way or the highway... what is important is others not to supplant me... even if it means keeping it a 3th class gaming/desktop/display environment -> The Rational
Posted Feb 21, 2013 9:23 UTC (Thu)
by mmarq (guest, #2332)
[Link] (2 responses)
Posted Feb 21, 2013 10:42 UTC (Thu)
by dgm (subscriber, #49227)
[Link] (1 responses)
Technically X.org 7.2 is a successor of X.org 7.1
Posted Feb 21, 2013 11:22 UTC (Thu)
by mmarq (guest, #2332)
[Link]
Posted Feb 11, 2013 21:49 UTC (Mon)
by marduk (subscriber, #3831)
[Link] (27 responses)
But when we finally have all that working we'll have to switch to Wayland :-|
Posted Feb 11, 2013 21:56 UTC (Mon)
by atai (subscriber, #10977)
[Link] (26 responses)
Posted Feb 11, 2013 22:14 UTC (Mon)
by marduk (subscriber, #3831)
[Link] (19 responses)
Posted Feb 12, 2013 0:47 UTC (Tue)
by samlh (subscriber, #56788)
[Link] (12 responses)
Practically, if you want to have screen-like behavior, you need a shadow framebuffer - or at the least a shadow 'windowbuffer'. This *will* have a performance impact, but it can be made reasonably fast. Indeed, the solutions mentioned in the other comments are quite usable.
Posted Feb 12, 2013 1:01 UTC (Tue)
by samlh (subscriber, #56788)
[Link] (11 responses)
Note, doing this in Wayland is likely to be no harder than it would be in X. Both would need to make the handles driver independent (possibly through a proxy driver), and both would need to track what is running on what GPU. The driver would need work as well, and they share the drivers.
Posted Feb 12, 2013 2:17 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link] (10 responses)
Though it's definitely not trivial.
Posted Feb 12, 2013 16:52 UTC (Tue)
by daniels (subscriber, #16193)
[Link] (9 responses)
Posted Feb 12, 2013 16:57 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link] (8 responses)
Also, the shim layer should always compile shaders with both GPU drivers at the same time so that you don't need to wait for too long during switches.
Posted Feb 12, 2013 16:58 UTC (Tue)
by daniels (subscriber, #16193)
[Link] (7 responses)
Posted Feb 12, 2013 17:02 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link] (6 responses)
Though there certainly should be a mechanism to get access to all GPU capabilities.
Posted Feb 13, 2013 9:48 UTC (Wed)
by renox (guest, #23785)
[Link] (2 responses)
But not the other way round. Correct me if I'm wrong, but an integrated GPU can do 'page flipping' (and this is an important mechanism for performance) while a discrete GPU cannot, because its memory is separated from the main memory.
Posted Feb 13, 2013 17:49 UTC (Wed)
by daniels (subscriber, #16193)
[Link] (1 responses)
The new mechanism Keith's talking about for partial page flipping (i.e. doing so on a per-page basis, whereas usually when we say 'page flipping' we mean doing it for the entire buffer - confusing I know) only really works at all on Intel hardware, and even then requires a great deal of co-operation from and hand-holding of all parts of the stack.
Posted Feb 21, 2013 10:27 UTC (Thu)
by mmarq (guest, #2332)
[Link]
then is useless, for anything more than trivial.
Posted Feb 21, 2013 7:07 UTC (Thu)
by elanthis (guest, #6227)
[Link] (2 responses)
Not true at all. It's very common for the discrete GPU to support a higher level of OpenGL and more extensions, which I really want to use. It's not at all uncommon on Windows to have self-written games and graphics apps crash and burn if you don't tell Optimus to use the discrete GPU for that process. If I know my game needs GL 3.3 and some extensions, and only one GPU can provide those, the stack should select that one. Likewise, the app probably knows if it only needs basic rendering or fast as possible rendering (the browser being a weird case, as different web apps might have different needs).
Part of the problem is that some heuristic and app list makes those decisions instead of the app saying during device handle creation that it would prefer the more capable GPU vs the more power efficient GPU. The app knows what it needs out of a GPU, and it should be able to at least hint to the stack about its preferences.
I know the DirectX/DXGI team has some plan here. No idea what Khronos plans to do with EGL, or if it's even thinking about the problem, but like most things Khronos does, it'll probably be some 1980's-style horrible error-prone API with zero debugging tools and be half a decade late to the party when it finally comes out.
Posted Feb 21, 2013 8:59 UTC (Thu)
by khim (subscriber, #9252)
[Link]
It used to be a common problem, but is it really common here and now? Reviews of "third generation" HD Graphics (Ivy Bridge) usually highlight the fact that it's an important milestone not because it's especially fast but because it's finally bug-free enough to run most programs without crashing and artifacts. AMD's integrated graphics was always quite good in this regard. Hint - yes, pick - no. Most GPU-using programs are proprietary ones, which means that they have the least amount of information in the whole system (the user knows what s/he bought, the OS can be upgraded, but programs are written once then used for years).
Posted Feb 21, 2013 10:36 UTC (Thu)
by dgm (subscriber, #49227)
[Link]
But it's the user who knows what he wants out of the app. It's the user's preferences for that app that matter.
Posted Feb 12, 2013 10:48 UTC (Tue)
by lindi (subscriber, #53135)
[Link] (5 responses)
Posted Feb 12, 2013 14:20 UTC (Tue)
by marduk (subscriber, #3831)
[Link] (4 responses)
It does for specific use cases. If you are a developer coding on a 3D first-person shooter game in emacs then it's probably fine. If you then want to *play* that game, then I think shared memory is not enough. Moreover, what if you had one of those hybrid laptops and wanted to switch from the Intel GPU to the Nvidia one mid-play (although that may be possible now, but it sounds like maybe not). If/when the server can talk directly to the GPU *and* hot-plug them, then that makes it easier to do things like this (easier albeit not easy) than if there is an intermediate server that can at best share memory with the server that owns the hardware.
The question though is what to do when there is no GPU attached and the clients are still running... perhaps you would need a virtual GPU layer or something like that.
Anyway that was my "dream". I realize that there are some solutions out there that kinda do what my dream is, but usually they do so inefficiently or require patches/workarounds for non-trivial use cases.
Posted Feb 12, 2013 16:20 UTC (Tue)
by drag (guest, #31333)
[Link] (3 responses)
Instead of the server doing the rendering, all the rendering is handled by the clients. The clients themselves talk to the GPU via whatever API they feel like using and then render their output to an off-screen buffer.
Then all the 'display server' has to do at this point, besides handling the grunt work of things like inputs and modesetting, is flip the client buffer and composite the display.
With a little bit of effort it should then be possible to kill the 'display server' and have another 'display server' be able to grab the client application's output buffer.
Taking that and extending the idea further, it should then be possible for you to take the buffer the client application is rendering to, perform some sort of efficient streaming image compression, and then send that output over the network.
It may need slightly more bandwidth than traditional X, but it would still benefit from accelerated rendering (especially if your image compression can be accelerated) and would be virtually unaffected by latency... which, with X, is its Achilles' heel.
You could then have a single application on your desktop and then seamlessly transfer it to your phone or anything else running on any network. No special effort needed on the side of the application developer and you still get acceleration and all that happy stuff.
All theoretical at this point, of course.
/end wayland troll
Posted Feb 12, 2013 18:58 UTC (Tue)
by dlang (guest, #313)
[Link] (1 responses)
Posted Feb 12, 2013 20:42 UTC (Tue)
by drag (guest, #31333)
[Link]
Is it going to be a big win for you to distribute the rendering of the applications?
It seems to me that the sort of applications where the bulk of the resources is put into the final rendering (video games, video playback, and the like) are exactly the sort of things that X11 is terrible at, and they are all done by direct rendering when possible.
Posted Feb 21, 2013 11:34 UTC (Thu)
by mmarq (guest, #2332)
[Link]
Posted Feb 11, 2013 23:35 UTC (Mon)
by jonabbey (guest, #2736)
[Link] (5 responses)
Posted Feb 12, 2013 1:20 UTC (Tue)
by smurf (subscriber, #17840)
[Link] (4 responses)
Posted Feb 12, 2013 11:39 UTC (Tue)
by NAR (subscriber, #1313)
[Link]
Posted Feb 12, 2013 23:09 UTC (Tue)
by jonabbey (guest, #2736)
[Link] (2 responses)
I use OpenNX on Linux and Mac, and it has the best performance I have ever seen for remote application usage. What used to take 3 minutes over coffee shop wi-fi goes down to like 5 seconds with NX.
A great pity that it has gone proprietary for the latest version.
Posted Feb 13, 2013 1:26 UTC (Wed)
by dtlin (subscriber, #36537)
[Link]
It absolutely is. nxagent is a fork of Xnest, although I'm not quite sure what version.
Posted Feb 21, 2013 11:06 UTC (Thu)
by mmarq (guest, #2332)
[Link]
NX started as proprietary... yet
So the mechanism (protocol addition) is free... still is... but blindness has a price... why maintain a free version when there is no "support"... i would do the same, improvements only on the proprietary "version"...
And the funny thing is that NX seems to address many of the issues that were discussed about wayland... most could be used not only in the context of remote display, but locally also...
http://www.linuxjournal.com/article/8477?page=0,2
* an efficient compression of what normally would be X traffic
* an intelligent mechanism to store and re-use (cache) data that is transfered between server and client
* a drastic reduction in multiple, time-consuming X roundtrips, bringing their total number *close to zero*.
So it seems clear to me it's not a copy or derivative of Xnest.. at all..
And i couldn't think of a better addition to the all X protocol than this... but sometimes wonder if Xorg has any say, or only does what its told to do, and that is why it has been "striped" and now positioned to be "finished"... hope the distros could take a serious look at this situation and take over, hire a "Ninja" or two, fresh ideas, specially the desktop oriented ones... their future is in jeopardy.
Posted Feb 11, 2013 23:36 UTC (Mon)
by ostar2 (guest, #88547)
[Link]
Posted Feb 12, 2013 3:47 UTC (Tue)
by wrobbie (guest, #39320)
[Link] (1 responses)
Posted Feb 13, 2013 8:01 UTC (Wed)
by drag (guest, #31333)
[Link]
Posted Feb 12, 2013 7:54 UTC (Tue)
by renox (guest, #23785)
[Link] (37 responses)
Posted Feb 12, 2013 9:04 UTC (Tue)
by alankila (guest, #47141)
[Link] (35 responses)
I can't see a way to usefully constrain a random application's behavior such that this couldn't ever be a problem. That's why security-conscious people invented the ctrl-alt-del keystroke combo that can't be caught by applications and which will always present the system log-on prompt. After that, people can be instructed to follow a procedure that ensures that login details won't be written to a program that merely looks like the system's login prompt.
If security relies on the user identifying windows and acting based on what they look like, I guess security can't be attained. The pixels are always under the attacker's control, one way or another. And I know of no way to sensibly secure, say, policykit's authentication prompt. Anybody can fake that, it's just a window... I have some hope that Weston, for instance, could make it impossible to make ordinary windows behave quite like security-critical windows. Microsoft chose to train people to look for a darkened desktop with a single authorization popup window in the middle of it. I've no idea if this is something no other application can fake, or what the point of that is, but it is a tough problem to solve.
Posted Feb 12, 2013 9:36 UTC (Tue)
by renox (guest, #23785)
[Link] (16 responses)
Having the display server set the titlebar instead of the application is a start with server-side decorations. Provided there is a 'secure' way for the display server to have such information, of course..
> That's why security-conscious people invented the ctrl-alt-del keystroke combo that can't be caught by applications and which will always present the system log-on prompt.
A *very incomplete* solution: a trojan game could spawn a window looking like a webpage..
Posted Feb 12, 2013 11:42 UTC (Tue)
by NAR (subscriber, #1313)
[Link] (2 responses)
Would this mean that my xterm wouldn't be able to print the prompt in the window title?
Posted Feb 12, 2013 12:51 UTC (Tue)
by renox (guest, #23785)
[Link] (1 responses)
By default yes.
Posted Feb 12, 2013 19:32 UTC (Tue)
by jond (subscriber, #37669)
[Link]
Posted Feb 12, 2013 18:11 UTC (Tue)
by tshow (subscriber, #6411)
[Link] (9 responses)
So my malicious application requests a window with no decorations and draws its own fake title bar. You have to allow undecorated windows unless you don't want to be able to do things like panels. There's no fixing this without breaking useful functionality.
Posted Feb 12, 2013 21:57 UTC (Tue)
by renox (guest, #23785)
[Link] (2 responses)
Posted Feb 13, 2013 8:15 UTC (Wed)
by ibukanov (subscriber, #3942)
[Link] (1 responses)
It would be a big oversight in the design of wayland if the server could not do that...
Posted Feb 13, 2013 8:42 UTC (Wed)
by renox (guest, #23785)
[Link]
I don't know if this is a big oversight in Wayland's design or not,
Posted Feb 13, 2013 8:09 UTC (Wed)
by ibukanov (subscriber, #3942)
[Link] (5 responses)
One just needs to restrict undecorated windows to a few trusted applications that are part of a secure GUI. Done right, it would not restrict any useful functionality. For example, a trusted panel can still show status icons and notifications from untrusted applications. And even watching full-screen movies should be possible, as window decorations indicating the trust level can appear on any user input.
Posted Feb 13, 2013 10:10 UTC (Wed)
by tnoo (subscriber, #20427)
[Link] (4 responses)
How would this work in a tiling window manager (xmonad, awesome, etc.)? These are so great because they don't waste any pixels on mostly useless decorations.
Posted Feb 13, 2013 10:18 UTC (Wed)
by renox (guest, #23785)
[Link]
Posted Feb 13, 2013 10:53 UTC (Wed)
by ibukanov (subscriber, #3942)
[Link] (2 responses)
Some pixels have to be wasted to provide fast visual cues about the trust level. With tiling one can try to color just one edge or a corner of the application in a semi-transparent way to communicate the trust.
Without always-present visual cues for passwords, one can try requiring a special trusted key that brings up a password-entering GUI in an always properly decorated window. But then one may forget to press that special key...
Posted Feb 13, 2013 11:42 UTC (Wed)
by tnoo (subscriber, #20427)
[Link] (1 responses)
So, frankly, I don't think that the idea you advocate will be of much use in practice.
Posted Feb 13, 2013 12:37 UTC (Wed)
by renox (guest, #23785)
[Link]
I do, thank you very much. By your reasoning one should remove those visual cues for secure connections? A bad idea.
> So, frankly, I don't think that the idea you advocate will be of much use in practice.
Being useful to those who do these checks is enough to make the feature useful in my opinion.
Posted Feb 13, 2013 12:24 UTC (Wed)
by iq-0 (subscriber, #36655)
[Link] (2 responses)
You'd have to prohibit undecorated windows, transparent windows and full screen windows. All three can be used to create an illusion of a secure input prompt. And even then a basic attack with a maximized window that has in its content pane something that looks like a secure input window would be enough to fool 80% of the users.
> > That's why security-conscious people invented the ctrl-alt-del keystroke combo that can't be caught by applications and which will always present the system log-on prompt.
The trick is that the special combo can't be grabbed by any normal program *and* that immediately all further input is only tied to a sort of secure input window that is always on top.
If no software can control the mouse or generate fake input for that dialog, then it's a pretty sure way of making sure only the person controlling the real input devices is making the input and that that input is only received by the secure input window. And "training" people to enter some magic uncatchable key sequence before entering system credentials helps reduce the risk that they might type it into some malicious window.
This is one of the few things Microsoft did right with UAC. Before granting administrative privileges to a program, a user has to press a button in a special popup that isn't visible to other Windows programs and which can only be controlled using a few designated input devices (so a malicious program can't click the button itself).
They explicitly don't ask for credentials (so a program faking a security dialog would get no information). And credentials are only required for login and unlock (the "screwup" in this scheme is that a malicious program can pretend to be the login or lock screen and since they no longer train people to always press 'ctrl-alt-del' before login/unlock, there is no way of knowing whether you are typing in the real or a faked variant).
Posted Feb 13, 2013 13:04 UTC (Wed)
by renox (guest, #23785)
[Link] (1 responses)
Posted Feb 13, 2013 14:17 UTC (Wed)
by iq-0 (subscriber, #36655)
[Link]
This trick could be used to implement a ctrl-alt-del out-of-band validation scheme where a QR-code like tag in a webpage can be used to show a separately loaded website (with very explicit origin information and extra strict settings).
And it would be a light-weight alternative to using a separate device with a camera (and people are free to choose whichever they want to use for the OOB validation).
I'd really like Paypal or my bank to support such an optional validation of a transaction and having an implementation baked-in to the system would make for a very light-weight workflow and be a good deal better than the current variant (and an optional 2nd device out-of-band check for local malware problems and possibly even compromised local networks).
Posted Feb 12, 2013 10:50 UTC (Tue)
by lindi (subscriber, #53135)
[Link] (13 responses)
Posted Feb 22, 2013 9:09 UTC (Fri)
by Serge (guest, #84957)
[Link] (12 responses)
That would be Alt+SysRq+K (Secure Access Key aka SAK) under text console and/or Ctrl+Alt+Backspace under X. Supported by any Linux distribution. However some distributions turn them off by default.
Posted Feb 22, 2013 9:49 UTC (Fri)
by khim (subscriber, #9252)
[Link] (6 responses)
Posted Feb 22, 2013 19:58 UTC (Fri)
by Serge (guest, #84957)
[Link] (5 responses)
Sure, they work differently, but they solve the same problem. Think about it: what do you actually need this feature for? It won't protect you from a virus deleting all your user's files. It won't help you against a trojan looking for your bank account. There's basically ONE problem that it should protect you from.
Imagine a public machine that different users can log in to. It does not matter whether it is Windows or Linux, a graphical or a text terminal. A user just comes, logs in, does the job and logs out, another user comes, etc. Now one of the users creates a "fake-login-program" that looks exactly like a login screen, runs that program and goes home. Another user comes, thinks that it's a real login screen, enters login/password, and the "fake-login-program" sends them to the author. That's the problem.
And that's the moment when you need those keys. If you're going to log in to a public Windows machine, you first hit Ctrl+Alt+Del, just in case, and then enter login/password. Same when you come to a public Linux machine: whatever you see on the screen, you first hit the "Secure Access Key" (if there's something running, it gets killed, getty will respawn), then enter login/password. That's just another (better) solution to the same problem.
> (and can be used for forensic purposes without immediately killing everything or to just securely lock/unlock workstation's screen).
You can Ctrl+Alt+F1...F6, SAK, log in as root, and "only guaranteed secure set of programs" will work there for you. Windows just doesn't have such a simple thing. :)
Posted Feb 23, 2013 15:19 UTC (Sat)
by khim (subscriber, #9252)
[Link] (4 responses)
Heh. I don't need to imagine that. This is exactly where we use Ctrl-Alt-Del on Windows the most. Yes, it does matter. Very much so. Sorry, but this is where you conveniently change your usecase to make sure you'll win the argument. Why would I log out? In our case it's a pretty beefy test Windows system (actually a few Windows systems: Windows XP, Windows Vista, Windows 7, Windows 8, etc.) which is shared with many other developers. It's a shared system, so we don't log out of it, but use Ctrl-Alt-Del to lock it instead. When the machine is locked (and it's usually locked) I can press Ctrl-Alt-Del to guarantee that I'm at the login screen, log into the system as me (all my programs and windows are where I left them), then, when I'm done with testing, lock the screen again. It's safe because all the places where I enter my password are under the control of the administrator or me; other people's sessions never see my password.
Is this a joke? Let's compare the Windows "awful solution" with Linux's "better" solution. Do you really believe this convoluted dance, which you need to perform again if you leave the system for three minutes to go to the WC, is somehow better than the Windows approach?
It looks like your information is out of date (as usual for a Linux pundit). Windows received this ability in Windows Vista, which is six years old by now! Before that it was impossible to combine secure Ctrl-Alt-Del with domains, which made this approach not all that practically usable. Microsoft fixed its usability problem and now it's a pleasure to use (and quite safe to boot), while Linux pundits continue to preach that their beloved Linux has the perfect solution while in fact its approach is clearly inferior (it may be theoretically slightly more safe, but in practice it's very easy to use it in an unsafe way and quite hard to use it in a safe way, which means that in practice it's worse).
Both Linux and Windows continue to evolve, and while some places where Linux is better still remain, Windows is better in many, many aspects. Ctrl-Alt-Del vs Alt+SysRq+K/Ctrl+Alt+Backspace is one of them. Think about it: why is Ctrl+Alt+Backspace disabled on many [most?] Linux distributions? It's for a reason! This approach is dangerous: it's very easy to accidentally lose your data. Windows's approach, on the other hand, is not just safe - it's a pleasure to use!
Posted Feb 23, 2013 22:12 UTC (Sat)
by Serge (guest, #84957)
[Link] (3 responses)
Good. Many people have a dedicated machine, often more than one; they need neither C-A-Del, nor C-A-BS, nor SAK, they just lock the screen and don't worry about such things.
> this is where you conveniently change your usecase to make sure you'll win the argument. Why would I log out?
I explained a more complex case, since I wasn't sure what you actually want. What you have described is a regular switch user (http://i.imgur.com/uhIeO.png) feature that is supported in every distribution around.
To protect from someone creating a screensaver-like tool with a fake "Switch User" button, you can configure the display manager to autologin on tty1 and run a single program with a large "Switch User" button on it. After that, to be safe, every time you want to enter the password you press Ctrl+Alt+F1. :)
> Windows received this ability in Windows Vista which is six year old by now!
That's another ability. Under Linux, on any virtual console you can run an arbitrary "guaranteed secure set of programs", anything you want, not just a winlogon dialog.
> Both Linux and Windows continue to evolve and while some places where Linux is better still remain Windows is better in many, many aspects.
Windows is usually better if you got used to it and its bugs. Windows is also often better when it works right out of the box and does exactly what you want. But when you want more, or want to optimize it for your needs, it's easier to configure linux than to fight with windows. IMHO, of course.
Posted Feb 23, 2013 23:10 UTC (Sat)
by lindi (subscriber, #53135)
[Link] (2 responses)
Before we can assess the security of your solution I think you need to first implement it. Before we can assess the usability of your solution I think you need at least a few hundred users. Sorry but the devil is usually in the implementation details :) Also, constantly running a second X server on tty1 wastes memory.
Posted Feb 24, 2013 13:19 UTC (Sun)
by khim (subscriber, #9252)
[Link]
This waste is limited and should be similar to what Windows wastes for its own login screen. So that's not a problem. This is a problem. Serge may argue that his solution is perfect (because it's not implemented and thus you can not argue about its weaknesses) while I argue that it's extremely bad: a solution which exists and is used is always more secure in practice than another solution which does not exist and is only imagined by someone.
Posted Feb 25, 2013 10:46 UTC (Mon)
by Serge (guest, #84957)
[Link]
It's easy. Create user "switcher", configure your DM to autologin it, and set its session to a shell script like this:
while true; do if zenity --info --text 'Switch Session'; then gdmflexiserver; fi; done
Just tested it with Ubuntu and LightDM.
> Before we can assess the usability of your solution I think you need at least a few hundred users.
How are you going to find them? Most people don't need it. Those few who really need it are skilled enough to write a one-line shell script themselves. :)
> Also, constantly running a second X server on tty1 wastes memory.
Yeah, about 10-15 MB. If that's too much you can replace `zenity` with the console `dialog` and run a similar script instead of getty. Unlike Windows, there are lots of options. :)
Posted Feb 22, 2013 11:16 UTC (Fri)
by lindi (subscriber, #53135)
[Link] (4 responses)
Section "InputClass"
to xorg.conf it manages to kill the server. However, a malicious user can run
setxkbmap -option ""
to disable this. It doesn't seem like ctrl-alt-backspace for designed for security.
Posted Feb 22, 2013 19:22 UTC (Fri)
by Serge (guest, #84957)
[Link] (3 responses)
That user would probably be you, and this is fine, since you should be able to change your settings. If someone else can run arbitrary commands in your session, Xorg is the least of your problems. :) Those settings will be lost as soon as you log out anyway.
> It doesn't seem like ctrl-alt-backspace for designed for security.
I guess it was not designed for security, but you can still use it for security. :) On the other hand Alt+SysRq+K was actually designed for security.
Posted Feb 22, 2013 19:35 UTC (Fri)
by lindi (subscriber, #53135)
[Link] (2 responses)
Posted Feb 22, 2013 20:11 UTC (Fri)
by Serge (guest, #84957)
[Link] (1 responses)
If somebody logged in, disabled the terminate sequence, and started a login-screen emulation, you'll notice that nothing happens when you press Ctrl-Alt-BS. :) But I agree that the "Secure Access Key" (Alt+SysRq+K on Linux) is better for that, and it works both for text and X terminals. It's just that some distributions disable the Magic SysRq keys, while C-A-BS usually works everywhere during the login screen.
Posted Feb 22, 2013 20:26 UTC (Fri)
by lindi (subscriber, #53135)
[Link]
Posted Feb 12, 2013 11:17 UTC (Tue)
by ibukanov (subscriber, #3942)
[Link] (3 responses)
http://qubes-os.org solved that by not allowing full-screen windows and by using different colors to differentiate windows with different levels of trust. Since in Qubes all applications run inside VMs that have no access to the real hardware, applications cannot influence window decorations.
Now, that is slow and drains batteries, but I guess with current hardware it is just impossible to virtualize GPU without performance impact.
> Microsoft chose to train people to look for darkened desktop with a single authorization popup window in middle of it. I've no idea if this is something no other application can fake, or what the point of that is, but it is a tough problem to solve.
It is possible to fake that as long as the OS allows fullscreen windows and does not use hardware buttons to assert administrative tasks. It is sad that MS abandoned that. Ctrl-Alt-Del is ugly and hard to press for a disabled person, but they at least could try to use a different key combination.
And even Google has not fully got that. On Android an app can fake the password-protected lock screen and capture a password, as Android does not require pressing any hardware button to unlock it before drawing the virtual keyboard. But at least I can press the home key. If that brings me to the home screen, then I know that was a fake.
Posted Feb 12, 2013 18:15 UTC (Tue)
by tshow (subscriber, #6411)
[Link] (1 responses)
Could I make a window that was bigger than the screen, such that all the borders were off screen and the client area effectively filled the visible area? If so, I could draw whatever I wanted on it, and the OS would still think I was a standard client window...
Posted Feb 13, 2013 8:01 UTC (Wed)
by ibukanov (subscriber, #3942)
[Link]
Barring bugs in Qubes, that is not possible. All applications run inside VMs, and their virtual display is smaller than the actual screen.
Posted Feb 13, 2013 6:44 UTC (Wed)
by nhippi (subscriber, #34640)
[Link]
The app store would still need to check that nobody is uploading apps with deceptive icons...
Posted Feb 21, 2013 12:06 UTC (Thu)
by mmarq (guest, #2332)
[Link]
Remotely, ssh is encrypted (and can improve things)... Locally, if you are "infected", then it is not the display server that could do anything about it...
LCA: The X-men speak
-- ctrl+F[1 .. ], Switch to workspace N
-- ctrl+shift+F[1 .. ], View to workspace N
-- meta+F[1 .. ], Move client to workspace N and follow
-- meta+shift+F[1 .. ], Move client to workspace N
-- alt+F[1 .. ], Swap with workspace N and follow
-- alt+shift+F[1 .. ], Swap with workspace N
-------------------------------------------------------------------
-- ctrl+meta+F[1 .. ], Switch to screen N
-- ctrl+meta+shift+F[1 .. ], Move client to screen N
-- alt+meta+F[1 .. ], Swap with screen N and follow
-- alt+meta+shift+F[1 .. ], Swap with screen N
-------------------------------------------------------------------
-- ctrl+alt+[left,right], Switch to workspace to the left or right
-- meta+[left,right], Move window to left or right and follow
-- meta+shift+[left,right], Move window to left or right
-- alt+meta+[left,right], Swap with workspace to left or right and follow
-- alt+meta+shift+[left,right], Swap with workspace to left or right
-------------------------------------------------------------------
-- ctrl+alt+[up,down], Switch to next/previous screen
-- meta+[up,down], Move window to next/previous screen and follow
-- meta+shift+[up,down], Move window to next/previous screen
-- alt+meta+[up,down], Swap with next/previous screen and follow
-- alt+meta+shift+[up,down], Swap with next/previous screen
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
It is also an input demultiplexer, so keystrokes and mouse and touch and whatever can get to the right client.
It might even play a role in the IPC needed for cut/paste etc.; I'm not completely clear on that.
You also need an implementation (like Weston).
And you need toolkits to render images for the compositor to composite
and you probably need other bits. Maybe even a network transport - you could use X for that or maybe something new and better.
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
1) It uses Linux FB support for modesetting. So no multiple monitors, rotation, etc.
2) It has its own input stack.
3) It required custom kernel modules and direct hardware access from userspace (ugh).
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Turning those features into plugins won't help much either. Instead of a large and complex Wayland protocol, you'll just get a simple protocol together with a large and complex Weston plugin API.
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Hm. I closed gnome-panel and opened xfce panel. I'm NOT seeing my applets from GNOME there. Skype's icon is also gone.
Exactly as in X.
Wrong. You can simply reload the plugin. Or write a compositor that simply forwards messages to your out-of-process tray host. In fact, both approaches are being tried right now.
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Except with binary NVidia drivers.
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Wayland the "successor" LCA: The X-men speak
Technically, X.org is a replacement for XFree86.
Technically, Wayland is something different.
Wayland the "successor" LCA: The X-men speak
LCA: The X-men speak
LCA: The X-men speak
LCA: The X-men speak
LCA: The X-men speak
LCA: The X-men speak
LCA: The X-men speak
LCA: The X-men speak
LCA: The X-men speak
LCA: The X-men speak
LCA: The X-men speak
LCA: The X-men speak
LCA: The X-men speak
LCA: The X-men speak
LCA: The X-men speak
LCA: The X-men speak
It's not at all uncommon on Windows to have self-written games and graphics apps crash and burn if you don't tell Optimus to use the discrete GPU for that process.
Part of the problem is that a heuristic and an application list make those decisions, instead of the app saying, during device-handle creation, that it would prefer the more capable GPU over the more power-efficient one. The app knows what it needs out of a GPU, and it should be able to at least hint to the stack about its preferences.
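On the Linux side, the PRIME offloading support that Airlie's talk covered already makes this choice explicit and per-process; a rough sketch using RandR 1.4 (the provider names are examples and vary by machine):

# List the available GPU providers and their capabilities.
xrandr --listproviders

# Pair the discrete GPU (the offload source) with the integrated
# GPU that drives the screen; "radeon" and "Intel" are examples.
xrandr --setprovideroffloadsink radeon Intel

# Run just this one application on the discrete GPU; everything
# else stays on the power-efficient one.
DRI_PRIME=1 glxgears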
LCA: The X-men speak
NX does not depend on any sort of hacked X server, unless it is doing so under the covers, a la Xnest, which it may be doing.
LCA: The X-men speak
http://www.linuxjournal.com/article/8477?page=0,1
"In these various messages, Gian hinted at the commercial NoMachine NX Server product. The features he outlined sounded exciting enough, but it was another message I took from his postings that thrilled me the most. It was how he outlined that NoMachine had released its main intellectual property, the core NoMachine NX libraries, under the *GPL license*. He wrote, "It is there and everybody can use it."
---------------------
NX combines three essential ingredients:
-----------------------
(there goes the walala hype down the drain)
LCA: The X-men speak
Some links to the videos may be useful... (Packard, Airlie)
DRI3000
More seriously: if the client allocates a buffer for the whole window including its decoration, isn't there a risk that the client also writes in the "decoration area"?
For example, a malicious client could want to change the window's title.
So if you want "security", the server needs to update the decoration area each time the client updates its buffer, because AFAIK the server cannot know whether the client touched the decoration area or not, so you cannot have the optimisation described in the article.
I'm well aware that you cannot have security with X currently, but the same issue happens with server-side decorations on top of Wayland (see http://blog.martin-graesslin.com/blog/2013/02/more-ration... especially ihavenoname's comment); with client-side decorations you don't have this security, so there is no problem ;-)
DRI3000
Having the display server set the titlebar instead of the application is a start with server-side decorations.
DRI3000
> Would this mean that my xterm wouldn't be able to print the prompt in the window title?
Though one possibility would be to have two parts in the titlebar: one set by the server (safe from tampering) and another set by the client.
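For reference, the client-controlled path is how titles are set today: shells typically embed the standard xterm escape (OSC 0) in the prompt, which is exactly what a server-owned titlebar would restrict:

# Put a string into the xterm window title (and icon name).
printf '\033]0;%s\007' "user@host: ~/src"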
DRI3000
Either you lose transparent panels, or you implement some fixed functionality in the compositor (the way the screensaver is implemented in the compositor), or you need a way for the server to distinguish between trusted clients and the others; I don't know if this is possible...
DRI3000
> It would be a big oversight in the design of wayland if the server could not do that...
But AFAIK this isn't part of Wayland's design currently: the "clients" which need privileges (screen capture, screen locker, virtual keyboards) are implemented in the server instead.
DRI3000
There are even some people who will provide their credentials (for bank or computer accounts) in response to emails from a "system administrator".
DRI3000
> A *very incomplete* solution: a trojan game could spawn a window looking like a webpage...
DRI3000
Ctrl+Alt+Backspace and Alt+SysRq+K perform a distinctly different operation from Windows' Ctrl+Alt+Del: they kill everything on this console and start a new session, while Ctrl+Alt+Del switches to a separate context where only a "guaranteed secure" set of programs works (and can be used for forensic purposes without immediately killing everything, or to just securely lock/unlock the workstation's screen).
DRI3000
Think about it, what do you actually need this feature for?
Imagine a public machine that different users can log in to.
It does not matter whether it is Windows or Linux, a graphical or a text terminal.
User just comes, logs in, does the job and logs out, another user comes, etc.
If you're going to log in to a public Windows machine, you first hit Ctrl+Alt+Del, just in case, and then enter login/password. Same when you come to a public Linux machine: whatever you see on the screen, you first hit the "Secure Access Key" (if there's something running, it gets killed and getty will respawn), then enter login/password. That's just another (better) solution to the same problem.
On Windows:
1. Press Alt-Ctrl-Del.
2. Pick your session.
3. Enter password and start working.

On Linux:
1. Try to find some free text login screen on some console.
2. Press Alt+SysRq+K to restart everything.
3. Log in and use some tools (which ones?) to see whether your session is hijacked or not.
4. Log out of the text console.
5. Switch to the graphical one where your session [hopefully] still resides, and finally
6. Unlock the screen.

You can Ctrl+Alt+F1...F6, SAK, log in as root, and "only a guaranteed secure set of programs" will work there for you. Windows just doesn't have such a simple thing. :)
DRI3000
DRI3000
DRI3000
Also, constantly running a second X server on tty1 wastes memory.
Before we can assess the security of your solution I think you need to first implement it. Before we can assess the usability of your solution I think you need at least a few hundred users.