
Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 3:37 UTC (Wed) by Kit (guest, #55925)
In reply to: Walsh: Introducing the SELinux Sandbox by spender
Parent article: Walsh: Introducing the SELinux Sandbox

>I'm assuming the "no actual compromise [...] has succeeded" part of the above comment was a typo.
I think you're misinterpreting what he wrote. I read it as meaning that the _goal_ is to stop the malicious program _early enough_ (by limiting the surface area of the system it can touch) that the exploit won't have a chance to succeed -- not that it's already 100% effective today, just that that is the goal/'focus'.

For example, why on earth does a web browser need r/w access to all of the user's files all the time? What about confining the browser so it can only read/write its own files (configuration, cache, history, etc.) but has no DIRECT access to any other files? Downloading/uploading could be partitioned off to a separate process: for a download, the browser asks the other process to save a file named 'NAME', that process opens the file dialog for the user to pick the destination, the browser pipes the data over, and the other process writes it to disk (upload would be much the same, just with the data flow reversed). This'd basically limit browser-based exploits to *only* being able to steal the browser's own data (and they'd likely not be able to persist across instances), unless an additional exploit or two are also found in the limited area the browser can actually access -- which would still be a huge improvement over the current model of 'screw the user, protect /bin/bash!' on non-server systems.



Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 4:02 UTC (Wed) by spender (guest, #23067) [Link] (10 responses)

You really should refrain from using words like "only" (especially emphasized) when talking about what arbitrary code executing in the context of a large piece of software with many dependencies and addons is limited to doing. You didn't mention kernel compromises that disable SELinux in your list of things it's limited to. Take the vmsplice exploit, for instance: it required only mmap, munmap, pipe, and vmsplice -- four things that every process on the machine was permitted to use. What files of the user's the exploit could write to didn't even come into the picture.

-Brad

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 6:50 UTC (Wed) by hppnq (guest, #14462) [Link] (7 responses)

But in the same thread you say that it might be a good idea to improve the security of the kernel itself. How would patching the kernel help against kernel bugs?

Note that the vmsplice vulnerability actually needs to be exploited. While it obviously does happen somewhere, that does not mean we should grab our wands and blow all vmsplice vulnerabilities into oblivion. I think it is rather obvious that it is a better idea to create an architecture that can itself be proven to be more secure than to think that a random pile of code can ever be made completely secure.

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 22:40 UTC (Wed) by spender (guest, #23067) [Link] (6 responses)

I'm a little confused by your post; maybe you can clarify a few things for me:

1) Was the use of "vulnerability" in italics a way of correcting my use of the phrase "vmsplice exploit"? My usage was correct -- I was referring to the actual publicly released exploit for the vulnerability so that I could comment on what system calls were used in it.

2) "How would patching the kernel help against kernel bugs?" Take NULL pointer dereference vulnerabilities as an example. If the kernel is unable to access userland memory directly, then these vulnerabilities become unexploitable for anything but a DoS. Would you not consider that patching of the kernel "help against kernel [vulnerabilities]"?

3) What's this architecture you're referring to? Are you saying the only options are fixing individual bugs or throwing SELinux-level complexity at the problem?

-Brad

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 10:29 UTC (Thu) by hppnq (guest, #14462) [Link] (5 responses)

Was the use of "vulnerability" in italics a way of correcting my use of the phrase "vmsplice exploit"?

Err, no. It was meant to stress "vulnerability". Even if an exploit for a vulnerability exists -- and let's just assume this is always the case -- it does not mean that you are also actually vulnerable. This is perhaps the most important part of security management: know your vulnerabilities. I mentioned it because this is something you seem to overlook. There is nothing wrong with that in discussions about specific vulnerabilities, but you are dismissing entire frameworks here.

Would you not consider that patching of the kernel "help against kernel [vulnerabilities]"?

Of course it would help make the kernel more secure. But it will not rule out kernel bugs. What's more, it seems a bad idea to think that any specific part of the kernel is able to protect the kernel.

What's this architecture you're referring to?

The architecture of which, for instance, SELinux is a part. Or grsecurity. Or my shielded network cable. As opposed to saying "this piece of code is secure".

Are you saying the only options are fixing individual bugs or throwing SELinux-level complexity at the problem?

No. I am saying that security follows from principles. A bug-free kernel with a perfect SELinux implementation would still not make most people safe -- whatever "safe" means for them.

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 12:48 UTC (Thu) by spender (guest, #23067) [Link] (4 responses)

I never talked about ruling out kernel bugs, you did. I was talking about making the kernel more secure, which involves reducing the number of exploitable vulnerabilities -- specifically by making certain vulnerability classes unexploitable.

"it seems a bad idea to think that any specific part of the kernel is able to protect the kernel." -- I just gave you an example that you agreed makes the kernel more secure. Do you need more examples? I listed them in another post. Why's it such a bad idea to make classes of vulnerabilities unexploitable and thus prevent someone from being able to take advantage of an applicable vulnerability for the purpose of arbitrary code execution in the kernel?

You said that it's a bad idea to think that the kernel can protect itself, but then you go on to say that SELinux is a good way of protecting the kernel (or at least, better than fixing a couple bugs). I agree -- it's better than fixing a couple bugs, but it's the wrong approach for protecting the kernel. And like it or not, a lot of what SELinux bothers itself with is an attempt (either directly or indirectly) to protect the kernel. There's no reason really to restrict some obscure system call that an application doesn't use in any of its code-paths unless you're assuming the possibility of arbitrary code execution. Even then, there's not much point in restricting the obscure system call (there are plenty more useful ones for the attacker that aren't restricted) unless you're trying to reduce the attack surface of the kernel.

It should be clear that using SELinux to try to protect the kernel isn't very successful, especially in the face of remote exploits (as noted earlier). That calls into question the usefulness of the additional complexity required for the vain attempt at keeping attackers from doing things they can do in other ways anyway, or from exploiting the kernel. It seems to me that it makes more sense to make classes of vulnerabilities unexploitable: that adds no additional complexity or burden for the user, and it has demonstrated itself to be more useful against real attacks.

Take, for instance, as I mentioned in another post, what's been done in terms of hardening userland applications invisibly -- these kinds of changes (when implemented properly, at least) are incredibly useful. In fact, one of SELinux's most useful protections -- one which really sticks out from its other features, and was contributed by a third party -- is essentially PaX's MPROTECT feature. What's needed is more of that reality-based security, not academic pie-in-the-sky solutions. The former has been working well for years; the latter has been struggling for over a decade to be relevant.

-Brad

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 14:57 UTC (Thu) by hppnq (guest, #14462) [Link] (3 responses)

Why's it such a bad idea to make classes of vulnerabilities unexploitable and thus prevent someone from being able to take advantage of an applicable vulnerability for the purpose of arbitrary code execution in the kernel?

It is not a bad idea, although I can't see what you mean by "unexploitable". What I was trying to say is not rocket science, nor is it clouded in riddles.

I never talked about ruling out kernel bugs, you did.

Why on earth do you waste your time on this? Yes, I confess: I did mention ruling out kernel bugs. I invite you to read it again.

Anyway. That Usenix article about automated kernel patching was quite interesting, but also quite silly. Talk about academic pie-in-the-sky solutions. Talk about ruling out kernel bugs.

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 16:05 UTC (Thu) by spender (guest, #23067) [Link] (2 responses)

The Usenix article was linked for its quantification of kernel vulnerabilities at any given time, specifically those that are silently fixed or mislabeled by vendors. That's why I specifically quoted that part in my other post; my linking to it doesn't imply my agreement with its conclusions -- I completely disagree with their conclusion/solution.

What don't you understand about "unexploitable"? Understanding that would be pretty important in determining whether the thing I mentioned helps kernel security or not, wouldn't it? You said the example I gave both helps kernel security and is not a bad idea, but neither of those things match up with what you said earlier:
1) "How would patching the kernel help against kernel bugs?"
2) "it seems a bad idea to think that any specific part of the kernel is able to protect the kernel."

So yes, what you're trying to say is clouded in riddles, because it doesn't make any sense whatsoever.

-Brad

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 17:58 UTC (Thu) by hppnq (guest, #14462) [Link] (1 responses)

You keep seeing things in black and white. So to you, with the right kernel patch (grsecurity, I presume) in place, things become "unexploitable" at one end of the spectrum, while one vulnerability in SCTP blows away SELinux completely at the other end of the spectrum.

What I am saying is: neither grsecurity nor SELinux will give you the security you claim they do (not) provide, unless you also seriously look at other factors. This is extremely straightforward; the Five Things To Keep In Mind point this out as well. (Your rant and vulnerability disclosure in Thing 2 shine a remarkable light on the Usenix paper indeed.)

I am not sure whether I am really unclear, or whether you really don't understand what I mean, but I think I have said enough about this now.

Walsh: Introducing the SELinux Sandbox

Posted May 29, 2009 19:08 UTC (Fri) by Arach (guest, #58847) [Link]

> You keep seeing things black and white. So to you, with the right kernel
> patch (grsecurity, I presume) in place, things become "unexploitable" at
> one end of the spectrum, while one vulnerablity in SCTP blows away
> SELinux completely at the other end of the spectrum.

Brad was talking about making a *single* class of bugs unexploitable *by design* (with hardware-enforced restrictions on memory management), not about any "things" becoming unexploitable, ever.

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 15:23 UTC (Wed) by Kit (guest, #55925) [Link] (1 responses)

>You really should refrain from using words like "only" (especially emphasized) when talking about what arbitrary code executing in the context of a large piece of software with many dependencies and addons is limited to doing.
Did you miss where I said 'unless an additional exploit or two are also found in the limited area that the browser can actually access'? Surely limiting the surface area where an exploit could possibly happen is a GOOD thing? And the reason I said 'only' is that in this situation, if the browser is exploited, it can't just immediately copy all your sensitive data to $EVIL_HACKER and then wipe your home directory.

>What files of the user the exploit could write to didn't even come into the picture.
Yes, it does. The user cares about HIS data when it comes to desktop systems (which this sandbox is an attempt to help protect), and the traditional security model does pretty much NOTHING to protect that on a standard desktop. Not all systems are far-off remote servers where no one ever logs in locally; desktops deserve security systems designed for their situations, which so far the traditional systems have largely failed to provide.

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 22:26 UTC (Wed) by spender (guest, #23067) [Link]

You're mixing up terminology. You used the word "exploit" which has a very specific meaning, but it seems like you're now wanting to be credited for meaning "vulnerability." When you say "unless an additional exploit or two are also found in the limited area that the browser can actually access" you're saying that there exist exploit binaries on disk which the browser process is allowed by SELinux to access and execute. In which case, I didn't miss anything at all and it's you who doesn't understand the meaning of "arbitrary code execution."

Now, if you *meant* to say that "unless there is an additional vulnerability or two in the code-paths of the kernel that a large and complex binary like a browser can reach," then we'd be in agreement.

-Brad

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 7:24 UTC (Wed) by tzafrir (subscriber, #11501) [Link] (2 responses)

So I downloaded a huge file in the browser and now I need to wait for it to be copied through a pipe?

Nice.

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 10:07 UTC (Wed) by dgm (subscriber, #49227) [Link]

It's almost certain that your system can pipe data faster than your network connection can deliver it. Several orders of magnitude faster, typically.

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 15:41 UTC (Wed) by Kit (guest, #55925) [Link]

If I read you right, you thought I meant that the file is first downloaded to a file the browser can write to, and then, once it's finished, that file is read and piped to the other process? If so, that's not really how the system would work as I envisioned it.

How I see it is this:
1.) The browser wants to download a file (either by the user explicitly clicking on the link, or via javascript, or whatever)
2.) The browser notifies the download service (also with the recommended filename, as well as the mimetype)
3.) The download service opens the desktop environment's normal file save dialog box
4.) The user decides where to save the file
5.) The download service tells the browser that the download was approved
6.) The browser begins downloading the data from the remote server
7.) The browser writes that data to the pipe to the download service (*not* to a temporary file or anything)
8.) The download service writes that file's data to disk

At no point does the browser have to write data to the disk; all the data is transferred immediately over the pipe.

For more security, the browser itself could be further broken up into multiple parts, akin to how Chrome is structured... which'd help isolate the X server from the remote data (I'd imagine that the X server would probably be the weak link in this situation), not to mention having the added benefit of one tab not slowing down all the others (at least in an ideal world).

The SELinux Sandbox and small utility programs

Posted May 27, 2009 15:23 UTC (Wed) by davecb (subscriber, #1574) [Link] (2 responses)

Back in the days of mainframes, you specified the files or other resources a program was going to need in a "job control" language (JCL).

If one collects and saves the JCL for all sorts of programs, one can then use SE Linux policies to limit them to only the resources they need, making attacks that subvert programs much more difficult. Now an attacker needs not only to modify the program, but also to change an SE Linux policy.
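A sketch of what such a JCL-derived policy might look like in SELinux's type-enforcement language (all type names here are hypothetical):

```
# Hypothetical fragment: grant a report generator only the files it
# declared up front, the way a JCL deck once did.
allow report_gen_t report_data_t:file { read getattr };
allow report_gen_t report_out_t:file  { create write };
# Everything not explicitly allowed is denied by default.
```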

--dave

The SELinux Sandbox and small utility programs

Posted May 27, 2009 18:57 UTC (Wed) by Trelane (subscriber, #56877) [Link] (1 responses)

could this perhaps be done through extended attributes?

The SELinux Sandbox and small utility programs

Posted May 27, 2009 19:10 UTC (Wed) by davecb (subscriber, #1574) [Link]

The label and permission data are stored in an attribute of sorts, although it is different from the user-settable extended attributes.

--dave

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 17:55 UTC (Wed) by nix (subscriber, #2304) [Link] (1 responses)

Of course AppArmor was doing this years ago.

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 12:28 UTC (Thu) by nix (subscriber, #2304) [Link]

... only it wasn't, because I misread. It lets you constrain things, but it doesn't let you say 'only pipe access'.


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds