vDSO, 32-bit time, and seccomp
The virtual dynamic shared object (vDSO) mechanism is an optimization provided by the kernel to reduce the cost of certain frequently used system calls. The vDSO is a small region of kernel-provided memory that is normally mapped into the address space of every user-space process; it contains implementations of system calls that can, in some circumstances at least, do their work in a user-space context. That allows the caller to avoid making a real system call and, thus, to avoid the cost of a context switch into kernel mode. System calls related to timekeeping, such as gettimeofday(), are implemented in the vDSO, since they can often run quickly in user space and they tend to be called frequently.
The vDSO has generally been implemented in an architecture-specific way, even though the functions it performs are mostly the same across architectures. In the 5.2 development cycle, Vincenzo Frascino added a generic vDSO implementation that factored out much of the architecture-specific code into a single implementation that could be used on all architectures. During the 5.3 merge window, the x86 architecture switched over to the generic version, and all was well — or so it seemed.
seccomp() sadness
In mid-July, Sean Christopherson (among others) reported that the generic vDSO change broke some seccomp() users on 32-bit x86 systems. seccomp(), remember, allows user space to provide a BPF program (still "classic BPF", not eBPF as is used almost everywhere else in a contemporary Linux system) to control which system calls may be made. It is used to reduce the attack surface of code that might be exposed to attackers in one way or another; using it correctly is hard, but the number of users has been on the rise.
While the vDSO can usually implement timekeeping system calls in user space, that is not always possible. If the calling program wants an esoteric clock that has not been implemented, or if the timekeeping hardware available on the system is not amenable to vDSO access, then the vDSO must fall back to calling into the kernel. Prior to 5.3, the architecture-specific vDSO used the native clock_gettime() call on the system it was running on; that meant calling the 32-bit clock_gettime() on 32-bit kernels.
The 32-bit time format is, of course, going to run out of range in January 2038. Quite a bit of work has gone into preparing systems for this particular apocalypse, though much work still remains. Given this problem, adding new users of 32-bit time interfaces is a way to become rather unpopular in kernel-development circles, so the generic vDSO implementation naturally used clock_gettime64() as the fallback timekeeping system call on all architectures. That is not the sort of thing that one would ordinarily even have to think about much; nobody wants to create a generic vDSO implementation that contains yet another year-2038 problem in need of fixing.
But there is a problem here. A surprising number of programs want to know what time it is at some point or another. Anybody putting together a seccomp() policy for a given program will almost certainly allow system calls like gettimeofday(); otherwise the target program will probably break. A program that fails to run is generally secure, but users, being generally unreasonable, tend to get disgruntled anyway.
Any rational seccomp() policy will, thus, allow for the fallback system call when the vDSO is unable to provide the time directly. But it turns out that, while these policies allowed clock_gettime() on 32-bit systems, they lacked the foresight to let clock_gettime64() through as well. The end result is that, when a program protected by one of these seccomp() policies runs on a 5.3 kernel, it is quickly and rudely killed when it tries to make a disallowed system call.
Kernel developers might protest that this change is required to avoid year-2038 problems. They might also be naturally inclined to disregard lame excuses about how clock_gettime64() was never needed before, or about how that system call didn't even exist until the 5.1 release. But, in the end, this is a regression, and the kernel community's policy on such things is fairly unambiguous. Somehow, programs running under existing seccomp() policies will need to continue to work when the final 5.3 kernel comes out.
Fixing the problem
Various ideas were raised for how that could be done, starting with a not-entirely-serious suggestion that the generic vDSO change could simply be reverted. Perhaps seccomp() rules could be bypassed for system calls that originate in the vDSO; this idea didn't get far given that, among other things, faking a vDSO return address is not a difficult thing to do. Bypassing seccomp() for clock_gettime64() specifically is an option, but that would defeat administrators who want to block all access to timekeeping information. The concept of "system-call aliases" was circulated, initially by Andy Lutomirski; it would create a short list of "equivalent" system calls that take the same arguments and do the same thing. If one call in the list was rejected by a seccomp() filter, the kernel would retry the policy with any aliases that might exist.
The alias idea got further than many, but it has problems of its own. For example, authors of seccomp() policies might genuinely want to discriminate between "equivalent" system calls. It seems like the sort of mechanism that could generate surprising results in general. Aliases might still be the long-term solution for this problem but, as Lutomirski pointed out, "it's getting quite late to start inventing new seccomp features to fix this". Something simpler is needed, at least for the 5.3 release.
That something is likely to be based on this patch series from Thomas Gleixner, which simply causes the vDSO to fall back to the 32-bit clock_gettime() system call on 32-bit systems. It is a solution that is pleasing to nobody, but it solves the regression issue for now.
Some other solution will be required eventually; it is not possible to support 32-bit time indefinitely. One possibility is that the authors of seccomp() policies change their code to allow clock_gettime64() as well. But, even if that could be done and widely deployed, there is no strong incentive for developers to do this work, since their existing policies will continue to function as intended. Some sort of multi-year deprecation process could be considered as a way to force policies to be fixed. But the eventual solution may just have to live in seccomp() instead, perhaps in the form of an alias list or other special exception. A long-term solution that is pleasing to everybody is difficult to envision.
This situation highlights a problem with seccomp() in general: it is difficult to write robust policies at that level of detail, and the resulting policies tend to be brittle in the best of times. Even if the kernel community avoids incompatible changes, a change in a library somewhere can invoke a new system call that a given seccomp() policy may frown upon. While the OpenBSD pledge() mechanism may not offer the degree of control provided by seccomp(), its use of relatively broad categories of functionality makes it easier to avoid problems like this. But Linux has seccomp(), with all its power and complexity. It seems highly likely that developers will unwittingly run into this sort of regression again in the future.
| Index entries for this article | |
|---|---|
| Kernel | Security/seccomp |
| Kernel | vDSO |
| Security | Linux kernel/Seccomp |
Posted Aug 2, 2019 18:15 UTC (Fri)
by dullfire (guest, #111432)
[Link] (13 responses)
Or we just ditch all precompiled 32-bit programs with builtin seccomp in 2038
Posted Aug 2, 2019 21:12 UTC (Fri)
by arnd (subscriber, #8866)
[Link]
The problem with seccomp gets much bigger when an application is recompiled against the time64 C library interfaces, which have to use the 64-bit system calls. However, when you do that, you also have to deal with other problems; this is just one of many things we need to address to have a 32-bit distro that can survive y2038, and one of many things that can go wrong with seccomp as we add new system calls that act as replacements for old ones.
Posted Aug 3, 2019 22:41 UTC (Sat)
by flussence (guest, #85566)
[Link] (11 responses)
Posted Aug 4, 2019 20:40 UTC (Sun)
by roc (subscriber, #30627)
[Link] (10 responses)
It's not pledge(), which is "we have studied all the applications anyone has ever written or ever will write and come up with a set of policies that work for them".
Posted Aug 4, 2019 20:45 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Aug 4, 2019 22:10 UTC (Sun)
by khim (subscriber, #9252)
[Link] (1 responses)
Then the kernel would know which syscalls are "alien" to this particular version and could use its alias database.
Heck, this way you could introduce some "fake" versions which only know about very few syscalls (and thus only allow a rough yet simple setup).
This is similar to how Android (well, bionic) handles such things, and it works well enough in practice (even if it's easy to construct an artificial example which would fall apart in such a scheme).
Posted Aug 17, 2019 5:13 UTC (Sat)
by gnoack (subscriber, #131611)
[Link]
For example, different libcs use different syscalls, which is the first thing to be compatible with.
Shared library loading can lead to very unexpected behaviour as well. LD_PRELOAD is one example. Another one is that when resolving hostnames, libnss in glibc loads shared modules for resolution behavior, and it's very difficult to predict what these will do. (OpenBSDs pledge has a special case for DNS as well, I believe so that they can distinguish between DNS and other UDP.)
In the end, with seccomp you need a very good control of how a program is built, which libc it uses, and in the case of glibc+DNS even how the system is configured. That seems unrealistic.
Posted Aug 6, 2019 7:42 UTC (Tue)
by mm7323 (subscriber, #87386)
[Link] (5 responses)
Relocation processing and such may make this fiddly to implement, but given that most things would be dynamically linked against glibc, where the system calls commonly come from, it might be possible to reduce overhead to just when loading that shared library, with minimal loss for most other programs.
Posted Aug 6, 2019 23:37 UTC (Tue)
by roc (subscriber, #30627)
[Link] (1 responses)
Posted Aug 7, 2019 11:31 UTC (Wed)
by mm7323 (subscriber, #87386)
[Link]
The other 1% of uses may be either bugs, bad code, or exploitable gadgets? It would be interesting research to find out.
Posted Aug 9, 2019 15:03 UTC (Fri)
by nix (subscriber, #2304)
[Link] (2 responses)
Posted Aug 9, 2019 15:06 UTC (Fri)
by nix (subscriber, #2304)
[Link] (1 responses)
Posted Aug 10, 2019 6:47 UTC (Sat)
by mm7323 (subscriber, #87386)
[Link]
That's why I suggest verifying the return addresses as well as call sites - to make chaining ROP gadgets harder. Combined with something like Pointer Authentication Codes in user space, this could button up call flows nicely to ensure code executes as designed when compiled.
That said, I'm not sure if it is possible to 'fake' the return address of a supervisor call or exception on any architectures.
> (You'd have to check that the stacks return to loci where there are actually function calls, and that's going to be much more expensive.)
All security has an overhead. The question is whether such a system could be made efficient enough to be worth the benefit. The idea here is to leverage the compiler to produce the needed records and fix them up when loading/dynamic linking, so that execution overhead could be as simple as some table lookups in the kernel around system calls. It will never be free, and even hardware-assisted things like PAC add instructions.
Posted Aug 8, 2019 17:25 UTC (Thu)
by flussence (guest, #85566)
[Link]
Posted Aug 2, 2019 18:21 UTC (Fri)
by luto (guest, #39314)
[Link]
Posted Aug 2, 2019 20:02 UTC (Fri)
by chris_se (subscriber, #99706)
[Link] (32 responses)
And from a historical perspective it's always been the case that any
Posted Aug 2, 2019 20:27 UTC (Fri)
by nix (subscriber, #2304)
[Link] (31 responses)
Posted Aug 2, 2019 23:19 UTC (Fri)
by quotemstr (subscriber, #45331)
[Link] (30 responses)
Posted Aug 2, 2019 23:33 UTC (Fri)
by mirabilos (subscriber, #84359)
[Link] (28 responses)
I agree, this is ridiculous.
Posted Aug 3, 2019 0:57 UTC (Sat)
by nix (subscriber, #2304)
[Link] (27 responses)
This is ridiculous. It drives a truck through ABI stability guarantees, even guarantees as carefully maintained as (say) glibc's.
Posted Aug 3, 2019 1:06 UTC (Sat)
by quotemstr (subscriber, #45331)
[Link] (20 responses)
Posted Aug 3, 2019 4:38 UTC (Sat)
by NYKevin (subscriber, #129325)
[Link] (19 responses)
A hypothetical crypto library should not need to call into the sockets API, create processes, manipulate shared memory, access the filesystem, or do a wide variety of other I/O-ish things. A malicious actor trying to exploit a buffer overrun would very much like to do those things, for all manner of reasons, but particularly for key exfiltration. We can reasonably foresee a malicious actor being able to cause such a buffer overrun in a crypto library, because it's actually happened numerous times. Not all of those bugs would have been stopped by seccomp (see for example Heartbleed), but no security measure claims to solve all problems.
At the other extreme, of course a shell is going to call all manner of I/O syscalls (except *maybe* for the sockets API). It really doesn't make sense to try and limit what a shell can do, because the whole point of a shell is to facilitate arbitrary code execution (by the user who is typing commands). Yes, restricted shells exist, but those tend to be sandboxed along different dimensions than "which syscalls are fair game."
Most software is going to fall somewhere between these extremes. So where does that leave us? If I were an upstream, the lesson I would take from this is to just write sensible code, and let downstreams figure out their own security policies. If they file a bug telling me that some of my code is unreasonable, and therefore tripping seccomp, I might fix it. If they file a bug telling me that my code does something that is inconvenient for them, but not unreasonable from where I sit, I would WONTFIX it and let the pieces fall where they may.
Posted Aug 3, 2019 6:17 UTC (Sat)
by Cyberax (✭ supporter ✭, #52523)
[Link] (13 responses)
> create processes
> manipulate shared memory
> access the filesystem
Posted Aug 3, 2019 7:01 UTC (Sat)
by NYKevin (subscriber, #129325)
[Link] (3 responses)
Sure, if that's the specific thing that you are doing. But then the application logic knows you are doing that, and can avoid sandboxing it.
> Or it might need to make outgoing connections to validate CRLs, for example.
Gods, no. If the application wants to use a CRL, it downloads it separately, and before applying the sandbox. The crypto library could, of course, provide a helper function for that, but it should not be part of the "main" codepath unless the caller has somehow asked for it. You don't make outgoing connections behind the application code's back.
> Read CA bundles.
read(2) poses substantially less of a security risk than write(2) and open(2), so I don't actually have a problem with this.
Posted Aug 3, 2019 9:24 UTC (Sat)
by storner (subscriber, #119)
[Link] (2 responses)
> Gods, no. If the application wants to use a CRL, it downloads it separately, and before applying the sandbox. The crypto library could, of course, provide a helper function for that, but it should not be part of the "main" codepath unless the caller has somehow asked for it. You don't make outgoing connections behind the application code's back.
Gods, no. CRLs from a public CA are huge, and the cost (time, bandwidth, storage) of downloading one would be prohibitive in most cases. You normally use OCSP, which requires an HTTP(S) network connection. So socket/network access is needed.
Posted Aug 3, 2019 10:56 UTC (Sat)
by chris_se (subscriber, #99706)
[Link]
Although in an ideal world everybody would use OCSP stapling - that way it wouldn't require the client to do OCSP requests to arbitrary destinations, and only each server would need to perform such a request every two days or so, and that only to its own CA.
Posted Aug 5, 2019 18:20 UTC (Mon)
by NYKevin (subscriber, #129325)
[Link]
Posted Aug 4, 2019 20:27 UTC (Sun)
by rwmj (subscriber, #5474)
[Link] (8 responses)
Posted Aug 4, 2019 21:00 UTC (Sun)
by roc (subscriber, #30627)
[Link] (7 responses)
For example almost every application needs read(). Most don't need the features provided by preadv2(), and those features trigger execution of a bunch of relatively new and untested kernel code. How would you use capabilities to control the ability of a confined process to access those features?
Posted Aug 4, 2019 21:11 UTC (Sun)
by quotemstr (subscriber, #45331)
[Link] (4 responses)
It's circular: we have to block them because they're rare, and they're rare because we block them. We can't make progress that way.
I'm all for addressing specific known vulnerabilities, but this practice of reflexively blocking anything new has got to stop.
Posted Aug 4, 2019 21:36 UTC (Sun)
by roc (subscriber, #30627)
[Link] (3 responses)
Also, many seccomp policies are tailored to the needs of the software they confine, rather than the other way around. Don't tell Chrome or Firefox that they should stop using seccomp policies to sandbox their browser processes because the kernel community needs additional testing of kernel code ... which their browser processes only exercise if they've been compromised.
Posted Aug 5, 2019 0:04 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
Raw syscall filtering really is looking like a bad solution.
Posted Aug 5, 2019 0:49 UTC (Mon)
by roc (subscriber, #30627)
[Link] (1 responses)
But that has nothing to do with this sub-thread, which is about whether capabilities obviate the need for seccomp.
Posted Aug 5, 2019 3:51 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Aug 5, 2019 14:06 UTC (Mon)
by MarcB (subscriber, #101804)
[Link] (1 responses)
Should it be some "personal firewall" to protect potentially vulnerable kernel code or should it restrict the functionality available to processes based on their needs (i.e. classical sandboxing)?
Personally, I think only the second concept is feasible. In that approach, there would be no difference whatsoever between read() and preadv2() - or clock_gettime64() and clock_gettime(). Those syscalls are equivalent in the sense that they allow a process to do exactly the same things.
If seccomp is used to filter arbitrary syscalls, this will lead to ossification (can't reliably use new syscalls) and maintenance or portability nightmares (just look at the circumstances needed to trigger this problem here). And frankly, if the Linux kernel really needed such a protective filter, it would be high time to switch operating systems (or to significantly change Linux's development process wrt syscalls).
Applications and administrators should define security in term of the security model provided by the operating system and not start second-guessing it. Doing so would cause the same madness operating system developers are currently experiencing with those hardware vulnerabilities, but on a much larger scale.
Posted Aug 5, 2019 21:48 UTC (Mon)
by roc (subscriber, #30627)
[Link]
> And frankly, if the Linux kernel really needed such a protective filter,
It does. See https://events.linuxfoundation.org/wp-content/uploads/201...
> it would be high time to switch operating systems (or to significantly change Linux' development process wrt syscalls).
Maybe so but for now seccomp-bpf is needed.
Posted Aug 3, 2019 18:22 UTC (Sat)
by dullfire (guest, #111432)
[Link] (4 responses)
A crypto lib in a program that cannot do any of those things is kind of useless (or, alternately, last I checked, seccomp applies to processes, not shared libs).
Posted Aug 3, 2019 19:51 UTC (Sat)
by mirabilos (subscriber, #84359)
[Link]
Posted Aug 5, 2019 13:09 UTC (Mon)
by leromarinvit (subscriber, #56850)
[Link] (2 responses)
Posted Aug 5, 2019 13:27 UTC (Mon)
by dullfire (guest, #111432)
[Link] (1 responses)
Posted Aug 5, 2019 15:59 UTC (Mon)
by nybble41 (subscriber, #55106)
[Link]
Posted Aug 3, 2019 1:15 UTC (Sat)
by mirabilos (subscriber, #84359)
[Link] (4 responses)
In contrast to the freedesktop.org/systemd/GNOME people and, apparently, Google, I care for more than just GNU/Linux/{amd,arm}64.
Posted Aug 3, 2019 15:01 UTC (Sat)
by nix (subscriber, #2304)
[Link] (1 responses)
Posted Aug 3, 2019 16:17 UTC (Sat)
by nivedita76 (subscriber, #121790)
[Link]
Posted Aug 5, 2019 16:32 UTC (Mon)
by josh (subscriber, #17465)
[Link] (1 responses)
Also, I'd be curious what problems you've observed with the access system call on various operating systems.
Posted Aug 22, 2019 22:13 UTC (Thu)
by mirabilos (subscriber, #84359)
[Link]
the shell uses stat and looks at the various bits (mtime, mode, …) for tests.
The condition “read-only filesystem” is not in the scope of the tests (it’s more of a run-time vs. how-the-fs-tree-is-set-up question) and EROFS will be thrown on actual accesses by the kernel.
Most tests are very low-level:
-g file file's mode has the setgid bit set.
Others aren’t, but…
-w file file exists and is writable.
… considering this is a Unix shell, the Unix file attributes are checked, no extended ones, and I know of no portable way to check for them. (That being said, I do not deal with extended attributes at all, and mksh is normally developed on MirBSD which doesn’t have them anyway, but I understand at least OS/2 and Cygwin/Interix/UWIN/PW32 out of the supported platforms do, if HPFS/NTFS is the underlying filesystem; I’m not familiar enough with these.)
I’d have to look why access(2) is not normally used. If it’s only false negatives, we could check _both_ access and stat, and if one fails return a failure. This would be dead slow on most operating systems, so I’d only enable it for those that really need it.
I do know that access(2) says the file is executable if the caller is root and the file isn’t. There’s already an access wrapper in the code, and another one for OS/2 (that deals with adding .exe automatically if needed)…
Posted Aug 4, 2019 22:37 UTC (Sun)
by marcH (subscriber, #57642)
[Link]
> the generic vDSO implementation naturally used clock_gettime64() as the fallback timekeeping system call on all architectures.
> During the 5.3 merge window, the x86 architecture switched over to the generic version,
If the version of clock_gettime() invoked was really the *internal* implementation detail it seemed to be, there wouldn't have been any issue. Just like firewalls, the seccomp approach doesn't seem to care about layers and abstractions. This basically "promotes" internal implementation details to API rank, right? What could possibly go wrong.
> Even if the kernel community avoids incompatible changes, a change in a library somewhere can invoke a new system call that a given seccomp() policy may frown upon.
Sounds like a "yes".
Posted Aug 4, 2019 21:04 UTC (Sun)
by roc (subscriber, #30627)
[Link]
Posted Aug 4, 2019 7:49 UTC (Sun)
by epa (subscriber, #39769)
[Link]
Posted Aug 5, 2019 19:07 UTC (Mon)
by madscientist (subscriber, #16861)
[Link] (2 responses)
The use of "esoteric" here is IMO misleading. Any clock that doesn't have vDSO support is essentially useless unless you only want to call it rarely... and most of the nonstandard clocks are there precisely to provide the kind of precise timing that is needed when calling them often.
I'm glad that as of the generic rewrite it appears that CLOCK_MONOTONIC_RAW will _finally_ get the vDSO treatment (on intel). This clock has been known to be virtually useless for years, with many blog posts pointing out (often without understanding why) that it's hundreds of times slower than CLOCK_MONOTONIC even though its behavior is actually what people want when measuring time intervals and the clock_gettime() man page makes it sound like it should be the most efficient option.
If you investigate the reasons why vDSO CLOCK_MONOTONIC_RAW isn't already available you'll run across a somewhat depressing example of the kernel development model failing.
Posted Aug 6, 2019 13:24 UTC (Tue)
by luto (guest, #39314)
[Link] (1 responses)
CLOCK_MONOTONIC_RAW for the x86 vDSO was merged just a couple of months after patches showed up. If there were significantly earlier requests, no one told me about them, and I'm the maintainer.
Posted Aug 6, 2019 18:01 UTC (Tue)
by madscientist (subscriber, #16861)
[Link]
Google shows that patches to add vDSO for Intel CLOCK_MONOTONIC_RAW were sent in March 2018 but it seems they were never applied; I can't find info on them via Google or "git log --grep".
Posted Aug 5, 2019 23:41 UTC (Mon)
by dezgeg (subscriber, #92243)
[Link] (1 responses)
But to match the existing ABI of clock_gettime(), the return value of the function will have to fit a 32-bit struct timespec anyway in the end. So how is it an improvement to have the VDSO to make a 64-bit clock_gettime64() call just to immediately truncate the seconds to 32 bits? Am I missing something?
Posted Aug 17, 2019 7:43 UTC (Sat)
by mcortese (guest, #52099)
[Link]
decides to switch its stat() wrapper to use the new statx() system call (for similar reasons) - then any seccomp policy (which is defined by programs outside of glibc) allowing stat() but not statx() would suddenly start to kill programs left and right. Sure, in this case it was the vDSO of the kernel instead of glibc that caused the problem, but in both cases the upgrade of a very basic system component broke the application.
wrapper around a system call may internally do other things as well, as long as it follows the documented contract. seccomp() breaks this understanding that has long existed to some extent.
> call into the sockets API
Except to set up the kernel-level TLS acceleration. Or it might need to make outgoing connections to validate CRLs, for example.
OK.
Except if it wants to use uring, maybe?
> or do a wide variety of other I/O-ish things.
Read CA bundles.
The situation has not improved.