
XDC2012: Graphics stack security

Posted Sep 26, 2012 16:16 UTC (Wed) by gioele (subscriber, #61675)
In reply to: XDC2012: Graphics stack security by k3ninho
Parent article: XDC2012: Graphics stack security

Why is VRAM treated differently from plain RAM? Can't a sanitizing step like https://lwn.net/Articles/334747/ be implemented for VRAM?

A more general question: why are GPUs being treated differently from CPUs or coprocessors? We are already there: GPUs need scheduling, process compartmentalisation, per-user limits, and so on, just like CPUs. It looks like people are reinventing everything on GPUs instead of just extending the existing concepts (if not code) to GPUs.


XDC2012: Graphics stack security

Posted Sep 26, 2012 16:42 UTC (Wed) by mupuf (subscriber, #86890) [Link] (1 responses)

Memory management on GPUs is very hardware-specific. For instance, some buffers may or may not be tiled depending on certain conditions. It is also possible to "swap" VRAM buffers into RAM, but certainly not onto the hard disk drive. In its current form, it is almost impossible to share code between the RAM allocator and the GPU VRAM allocator (which is driver-dependent).

In the end, we are "reinventing the wheel", but the concepts still hold (the idea for VRAM sanitization didn't come to me overnight). However, the code is completely different because GPUs are more complex than CPUs.

As for why GPUs aren't considered like CPUs, the reason is that they are built for throughput and aren't ready to be treated as just another processor (almost no GPUs support preemption). We are slowly but surely moving in this direction, but it is too early for us to design an API that would accommodate both CPUs and GPUs.

In the end, we can say that we are indeed extending the already-existing CPU concepts to GPUs, but it takes time, and the GPU ecosystem is much more fragmented than the CPU world. Moreover, most GPUs (if not all) aren't ready to be considered an equivalent of x86 (not even remotely).

XDC2012: Graphics stack security

Posted Sep 27, 2012 16:37 UTC (Thu) by ortalo (guest, #4654) [Link]

I remember reading hardware documentation of something similar to address translation tables for onboard VRAM on a Cirrus Logic Laguna3D graphics chipset circa... 1996. Good wheels get reinvented so often... ;-)

But even if common hardware protection features are not (uniformly, at least) available on GPUs, from what we know about other hardware, it seems that memory protection and privileged instructions are foundational features for practical security mechanisms.

If these features are not in the hardware, I wonder if it's even possible to address the issue of graphics stack security for multiple applications without:
1) either emulating them (something which apparently has started to appear but does not seem to be universally agreed upon);
2) or evolving the security model to address a different kind of security feature (for example: forbid applications with conflicting security requirements from using the same "screen").

By the way, maybe option 2 is really workable in the context of graphical applications. Maybe there is a need for a risk/attack analysis at a higher level, and more thinking about the most important security features, in order to provide a decent (if not completely satisfactory) implementation.

For example, I would not mind trading a long context-switch time (~1 s) for full hardware control for a fullscreen application (a game), while I might be reluctant to do that for a text editor; framebuffer security might even be a reason not to use a GUI for an encryption tool at all.


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds