Keeping memory contents secret
Posted Nov 15, 2019 22:04 UTC (Fri) by roc (subscriber, #30627)
In reply to: Keeping memory contents secret by SEJeff
Parent article: Keeping memory contents secret
Posted Nov 16, 2019 0:54 UTC (Sat) by wahern (subscriber, #37304)
The confidentiality guarantee of SEV was eviscerated by timing attacks long before SGX was eviscerated, though certainly after the SEV papers nobody should have trusted SGX absent affirmative evidence that such timing attacks weren't going to keep metastasizing. The benefit of SEV is effectively the same as simply not mapping certain userspace regions into kernel space. If you control the kernel/hypervisor you can read the memory either way--either the easy way or, with SEV, the "hard way", which is actually not that difficult per the published attacks.
[1] AFAIU, SGX permits the userspace library to sign a challenge from the content provider; the content provider in turn asks Intel (through an online and, presumably, very expensive web service) to confirm its validity. The kernel has nothing to do with any of this except for some basic initialization and management of SGX, which AFAIU is still absent in Linus' tree.
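The attestation flow described above can be sketched in a few lines. This is an illustrative model only, not the real SGX API: the key names, functions, and HMAC construction are all stand-ins for the hardware-fused key and Intel's quoting/attestation machinery.

```python
# Hypothetical sketch of an SGX-style remote attestation round trip.
# All names are illustrative; real SGX uses EPID/ECDSA quotes, not HMAC.
import hashlib
import hmac

ENCLAVE_KEY = b"device-provisioned-secret"  # stands in for the CPU-fused key
VENDOR_VIEW_OF_KEY = ENCLAVE_KEY            # the vendor's attestation service knows it

def enclave_sign(challenge: bytes) -> bytes:
    """Step 1: the userspace enclave signs the content provider's challenge."""
    return hmac.new(ENCLAVE_KEY, challenge, hashlib.sha256).digest()

def attestation_service_verify(challenge: bytes, quote: bytes) -> bool:
    """Step 2: the provider forwards the quote to the vendor's (online, paid)
    attestation service, which checks it against the key that only genuine
    hardware could hold. The kernel plays no part in this exchange."""
    expected = hmac.new(VENDOR_VIEW_OF_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, quote)

challenge = b"provider-nonce-1234"
quote = enclave_sign(challenge)
print(attestation_service_verify(challenge, quote))        # genuine enclave
print(attestation_service_verify(challenge, b"x" * 32))    # forged quote
```

The point of the round trip is that the provider never needs to trust the kernel or hypervisor, only the hardware vendor's key material.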
Posted Nov 16, 2019 9:14 UTC (Sat) by roc (subscriber, #30627)
Posted Nov 16, 2019 9:55 UTC (Sat) by wahern (subscriber, #37304)
Here's an interesting (and confusing) HN thread from 10 months ago with various claims: https://news.ycombinator.com/item?id=18828654. Despite the contradictory assertions, there's enough context, like the distinction between L1 and L3, to suggest hardware-backed DRM schemes are being relied upon even by Firefox and Chrome, but probably not using SGX, especially considering the Netflix requirements.
Posted Nov 25, 2019 10:48 UTC (Mon) by sandeep_89 (guest, #127524)
Posted Nov 16, 2019 9:31 UTC (Sat) by wahern (subscriber, #37304)
In any event, with or without remote attestation it's not reasonable to trust that a guest's memory is unreadable from the host; these technologies aren't holding up well to side-channel attacks, not even on AMD chips, which have otherwise been comparatively resilient. Better to treat them as defense in depth.
I think all of these developments augur *against* providing exacting semantics for anything promising confidentiality. The situation is *far* too fluid. We can't even say with strong confidence that SEV suffices, not to mention SME or SGX. Any interface will be best effort as a practical matter, and will very likely need to be tweaked in the future in ways that change its performance and security characteristics. If you don't want developers to develop a false sense of security, then keep things conspicuously vague! Alternatively, or in addition, avoid abstraction and pass through specific architecture interfaces and semantics as closely as possible, conspicuously handing the risk, uncertainty, and responsibility to the developer. Anyhow, sometimes security is best served by recognizing that choice is an illusion and not offering choices at all.
The irony is that, attestation and physical attacks aside, the demand and urgency for these things come from the failures of existing software and hardware to meet *existing* confidentiality and security guarantees--guarantees that should already suffice. We should think twice about writing any more checks (i.e. particular confidentiality semantics) we aren't absolutely sure can be cashed. Anyhow, no company would care whether an AWS hypervisor could read guest memory if it could absolutely expect AWS' software and hardware to work as designed. The desire for zero trust only exists in the minds of geeks, techno-libertarians, and Hollywood studios. Organizations are only going to depend on these new features to prove service providers' and their own diligence. That's not a cynical statement, just a recognition that at the end of the day they depend, and must depend, on the geeks to make reasonable decisions and continual adjustments. The same is true for these kinds of security issues in the relationship between kernel interfaces and userland applications; the history of system entropy interfaces is particularly instructive.
Posted Nov 18, 2019 18:49 UTC (Mon) by NYKevin (subscriber, #129325)
If I had to pick between a best-effort, vague interface and a specific interface tied to implementation details, I'm pretty sure the former is more future-proof. In the best-case scenario, we can opportunistically begin offering real guarantees as they become available, and in the worst case, we can simply deprecate the whole thing, since it never offered any guarantees to begin with.
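One way to picture the "vague now, real guarantees later" idea is an interface that returns, alongside the allocation, an honest report of which protections are actually in effect. This is purely an illustrative sketch, not any real kernel or library API; the names and flags are invented for the example.

```python
# Illustrative sketch of a best-effort "secret memory" interface that can
# opportunistically upgrade its guarantees without breaking callers.
# Nothing here corresponds to a real kernel API.
from enum import Flag, auto

class Guarantee(Flag):
    NONE = 0
    UNMAPPED_FROM_KERNEL = auto()   # e.g. removed from the kernel direct map
    ENCRYPTED_IN_DRAM = auto()      # e.g. SEV/SME-style memory encryption

def alloc_secret(size: int) -> tuple[bytearray, Guarantee]:
    """Allocate 'size' bytes of 'secret' memory.

    Callers must treat the returned Guarantee flags as informational and
    best-effort, never as a promise; a later implementation could start
    returning stronger flags with no interface change."""
    buf = bytearray(size)
    # Today: no hardware backing, so honestly report best-effort only.
    return buf, Guarantee.NONE

buf, g = alloc_secret(4096)
print(Guarantee.ENCRYPTED_IN_DRAM in g)  # a future kernel might flip this
```

Because callers only ever probe the flags, the interface can be deprecated, or quietly strengthened, without changing its contract.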
> Anyhow, no company would care whether an AWS hypervisor could read guest memory if they could absolutely expect AWS' software and hardware to work as designed. The desire for zero-trust only exists in the minds of geeks, techno-libertarians, and Hollywood studios.
Certain industries have a tendency to ask for guarantees that are perhaps unnecessary or impractical, but are nevertheless required by some combination of laws, regulations, and industry standards. See for example PCI DSS, HIPAA, FIPS, and so on. It is entirely fair to think that this is a foolish thing for those industries to do, but ultimately, it's their money, and they are choosing to spend it (indirectly via AWS et al.) on building these features into the kernel.