
XDC2012: Graphics stack security

Posted Sep 26, 2012 11:23 UTC (Wed) by k3ninho (subscriber, #50375)
Parent article: XDC2012: Graphics stack security

>Martin observed that a GPU buffer is not zeroed when it is allocated, meaning that the previous user's data is visible to the new user. This could create a confidentiality issue. The problem is that zeroing buffers has a heavy performance impact. He suggested two strategies for dealing with this: zeroing deallocated buffers when the CPU is idle and using the GPU to perform zeroing of buffers.

Is that 'zero-ing when not busy' actually a request for an in-kernel TRIM thread? Such a thread might also need to make promises about when its work eventually gets done, but that could be mitigated with a fallback plan for a busy system: explicitly wipe the memory before it's re-used. Do I hear "show me the code"?
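
A minimal sketch of that fallback path might look like the following; every name in it (the struct, the wiped flag, the page size) is invented for illustration and is not taken from any real driver:

    #include <stdbool.h>
    #include <string.h>

    /* Illustration only: all names here are invented, not from any real driver. */
    struct vram_page {
        unsigned char data[4096];
        bool wiped;            /* set by the idle-time wiper thread */
    };

    /* Make sure a page is clean before handing it to a new GPU user. */
    static void ensure_wiped(struct vram_page *page)
    {
        if (page->wiped)
            return;            /* fast path: the background wiper got here first */

        /* Busy-system fallback: wipe synchronously at re-use time. */
        memset(page->data, 0, sizeof(page->data));
        page->wiped = true;
    }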

Ken.



XDC2012: Graphics stack security

Posted Sep 26, 2012 13:43 UTC (Wed) by mupuf (subscriber, #86890) [Link]

Here are basically my thoughts about this:
- each VRAM page should be marked with the id of the GPU user that created it.
- the wiping process requires another attribute (being_wiped) on each page. Before scheduling a batch of page wiping, the corresponding pages should be marked "being_wiped". When the wiping process is done, the pages should be marked wiped (no longer linked to a single user) and the being_wiped flag should be cleared.
- when allocating memory for a user, memory pages that were previously used by this user should be used for the allocation. When none are left, wiped memory pages should be used. If none are left but there are pages being wiped, then the allocator should wait for them to be wiped before using them. Otherwise, just return ENOMEM.

The wiping process should be run on the whole VRAM at boot time and can then be scheduled whenever the pool of wiped pages is getting low. This should lower the number of wipe operations and thus the VRAM bandwidth usage.
I expect the performance impact to be minimal in most cases. However, it makes the allocation more complicated and more memory-consuming.
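
For what it's worth, a minimal user-space sketch of that allocation order (the user's own pages first, then already-wiped pages, then pages still being wiped) could look like the code below; every name in it is invented for illustration and none of it is real driver code:

    #include <stdbool.h>
    #include <stddef.h>

    #define NO_OWNER (-1)

    /* Per-page state as described above; all names are illustrative. */
    struct vram_page {
        int owner;          /* GPU user that last wrote the page, or NO_OWNER */
        bool in_use;        /* currently allocated to someone */
        bool being_wiped;   /* queued for the background wiping process */
        bool wiped;         /* known to contain only zeroes */
    };

    struct vram_pool {
        struct vram_page *pages;
        size_t nr_pages;
    };

    /*
     * Pick a free page for 'user': prefer a page this user already dirtied
     * (no wipe needed), then an already-wiped page, then a page that is
     * still being wiped (the caller would have to wait for the wiper).
     * Returning NULL stands in for ENOMEM.
     */
    static struct vram_page *vram_alloc_page(struct vram_pool *pool, int user)
    {
        struct vram_page *own = NULL, *clean = NULL, *pending = NULL;

        for (size_t i = 0; i < pool->nr_pages; i++) {
            struct vram_page *p = &pool->pages[i];

            if (p->in_use)
                continue;
            if (p->owner == user)
                own = p;
            else if (p->wiped)
                clean = p;
            else if (p->being_wiped)
                pending = p;
        }

        if (own)
            return own;
        if (clean)
            return clean;
        return pending;     /* caller waits on the wiper; NULL means no memory */
    }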

On the other hand, if we want all buffers to be wiped at allocation time, then we can lower the allocation complexity at the expense of memory-bandwidth usage (more wiping is needed).

XDC2012: Graphics stack security

Posted Sep 26, 2012 16:16 UTC (Wed) by gioele (subscriber, #61675) [Link] (2 responses)

Why is VRAM treated differently from plain RAM? Can't a sanitizing step like https://lwn.net/Articles/334747/ be implemented for VRAM?

A more general question: why are GPUs being treated differently from CPUs or coprocessors? We are already there: GPUs need scheduling, process compartmentalisation, per-user limits, and so on, just like CPUs. It looks like people are reinventing everything on GPUs instead of just extending the existing concepts (if not code) to them.

XDC2012: Graphics stack security

Posted Sep 26, 2012 16:42 UTC (Wed) by mupuf (subscriber, #86890) [Link] (1 response)

Memory management on GPUs is very hardware-specific. For instance, some buffers can be tiled or not, depending on certain conditions. It is also possible to "swap" VRAM buffers into RAM, but certainly not onto the hard disk drive. In its current form, it is almost impossible to share code between the RAM allocator and the GPU VRAM allocator (which is driver-dependent).

In the end, we are "reinventing the wheel", but the concepts still hold (I didn't come up with the idea of VRAM sanitization overnight). However, the code is completely different because GPUs are more complex than CPUs.

As for why GPUs aren't considered like CPUs, the reason is that they are built for throughput and aren't ready to be treated as a new kind of processor (almost no GPUs support preemption). We are slowly but surely moving in this direction, but it is too early for us to design an API that would accommodate both CPUs and GPUs.

In the end, we can say that we are indeed extending the already-existing CPU concepts to GPUs, but it takes time, and the GPU ecosystem is much more fragmented than the CPU world. Moreover, most GPUs (if not all) aren't ready to be considered an equivalent of x86 (not even remotely).

XDC2012: Graphics stack security

Posted Sep 27, 2012 16:37 UTC (Thu) by ortalo (guest, #4654) [Link]

I remember reading hardware documentation of something similar to address translation tables for onboard VRAM on a Cirrus Logic Laguna3D graphics chipset circa... 1996. Good wheels get reinvented so often... ;-)

But even if common hardware protection features are not (uniformly, at least) available on GPUs, from what we know about other hardware it seems that memory protection and privileged instructions are foundational features for practical security mechanisms.

If these features are not in the hardware, I wonder if it's even possible to address the issue of graphics stack security for multiple applications without:
1) either emulating them (something which apparently has started to appear but does not seem to be universally agreed upon);
2) or evolving the security model to address a different kind of security feature (for example: forbidding applications with conflicting security requirements from using the same "screen").

By the way, maybe option 2 is really workable in the context of graphical applications. Maybe there is a need for a risk/attack analysis at a higher level, and more thinking about the most important security features, in order to provide a decent (if not completely satisfactory) implementation.

For example, I would not mind trading a long context-switch time (~1s) for full hardware control for a fullscreen application (a game), while I might be reluctant to do that for a text editor, and framebuffer security may even be a reason not to use a GUI for an encryption tool.

