
Walsh: Introducing the SELinux Sandbox

Dan Walsh and Eric Paris have been working on an SELinux "sandbox" which Walsh describes on his weblog. The basic idea is to use SELinux to restrict the kinds of actions a user application can perform. This would allow users to run untrusted programs or handle untrusted input in a more secure manner. "The discussions brought up an old Bug report of [mine] about writing policy for the 'little things'. SELinux does a great job of confining System Services, but what about applications executed by users. The bug report talked about confining grep, awk, ls ... The idea was couldn't we stop the grep or the mv command from suddenly opening up a network connection and copying off my /etc/shadow file to parts unknown." Paris also posted an introduction to the sandbox on linux-kernel.


More background...

Posted May 26, 2009 22:30 UTC (Tue) by jamesmrh (guest, #31622) [Link]

Something to note is that this was conceived partly in response to lkml discussions about expanding seccomp -- "what can we do with SELinux and sandboxing?" a couple of weeks back.

A first cut of the solution, with GUI support and Unixy semantics, is now integrated into Fedora via a simple addition to the security policy (no code changes to the kernel or userspace were required).

Walsh: Introducing the SELinux Sandbox

Posted May 26, 2009 23:43 UTC (Tue) by pr1268 (guest, #24648) [Link] (76 responses)

I dunno... It seems that the existing security mechanisms in Linux would restrict unprivileged users from doing malicious stuff without the need for an SELinux "sandbox". On the other hand, if a non-root user were running a compromised grep, awk, or ls such that the /etc/shadow file got copied to "parts unknown", then there are more pressing issues at play than an SELinux sandbox can address. Just a mild rant based on my observations from the cheap seats...

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 1:40 UTC (Wed) by gdt (subscriber, #6284) [Link] (47 responses)

It seems that the existing security mechanisms in Linux would restrict unprivileged users from doing malicious stuff without the need for a SELinux "sandbox".

Two points you may have missed in Dan's blog: (1) a finer notion of privilege than the user; (2) restriction of access to data, not only to the system.

For example, you might run a script to encode all your FLAC music into Ogg Vorbis. That script will run as you, pr1268, so traditional Unix access control gives the script access to all of the files marked as owned by pr1268. That is far too much access -- that re-encoding script should not be able to do anything other than transform the input to the output. It certainly should not be able to maliciously use the too-wide Unix access, such as encrypting all of pr1268's photographs and demanding a ransom in return for the decryption key.

As SELinux policy stands today, that sort of malicious act is not prevented. Development of SELinux to date has focused on broad policy and on protecting the system, not on protecting users' privacy and data. These are the next frontiers for the Linux MACs -- I quite like Dan's phrase "policy for the little things".

The "sandbox" is a tight policy for the smallest thing -- a piped command which transforms its input.

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 1:59 UTC (Wed) by gdt (subscriber, #6284) [Link] (43 responses)

Sorry, one other thing. The traditional Unix attitude to that ransom-demanding script is, "too bad, they've got root, game over". The point of SELinux is to say "you've got root, but you still don't get to win".

The focus with SELinux to date has been to say "no" early enough so that no actual compromise of the machine by the root-obtaining exploit has succeeded.

What is starting to happen now is more interesting, which is to secure the privacy and integrity of data in a hostile environment.

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 3:13 UTC (Wed) by spender (guest, #23067) [Link] (42 responses)

5 things to keep in mind:

http://invisiblethingslab.com/pub/xenfb-adventures-10.pdf
http://marc.info/?l=dailydave&m=117294179528847&w=2
http://kernelbof.blogspot.com/
http://www.immunityinc.com/documentation/cloudburst-vista...
http://www.usenix.org/event/hotos09/tech/full_papers/arno...

I'm assuming the "no actual compromise [...] has succeeded" part of the above comment was a typo. Given that "an attacker seeking to exploit unidentified vulnerabilities in Linux bug-fix disclosures would have [...] between 4 and 16 bugs with hidden impact waiting for him or her at any time in the last three years", it might be a good idea to put some focus on improving the security of the kernel itself, upon which the integrity of these "privacy and integrity" protectors depends.

-Brad

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 3:35 UTC (Wed) by JoeBuck (guest, #2330) [Link] (21 responses)

So you're saying, instead of trying to limit the damage of root exploits, eliminate every possible root exploit instead? It's not going to happen, not unless kernel hackers stop all development other than security bug fixes.

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 22:14 UTC (Wed) by spender (guest, #23067) [Link] (20 responses)

That's not what I'm saying. Also, you made the unsubstantiated assumption that putting "some focus on improving the security of the kernel" means "security bug fixes" and then attributed that assumption to me so that you could dismiss it (straw man).

What I'm saying is, I hope the people pushing SELinux get out of the habit of throwing around phrases that exaggerate the effectiveness of SELinux or suggest that it provides a guarantee of anything. I made the same request years ago when I published my local root exploit that disabled SELinux (and every other LSM security module) -- that apparently had no effect on them, nor has the recently published remote root + SELinux disabling exploit.

There's no better demonstration in my mind of the importance of improving the kernel's security than the fact that it was possible to disable SELinux remotely with a single 4-byte write. A proper response to the exploit should have been to look at the techniques used (which surely will be duplicated in future exploits as the code is public) and see what could be done to prevent them. Instead it was 'business-as-usual' for the vendors: just fix the remotely exploitable vulnerability that we classified as a denial-of-service (as is done with nearly all exploitable vulnerabilities for which there doesn't exist public code exploiting them), hope nobody makes a fuss about it, and move on.

Maybe I haven't looked hard enough, but I can't find any article from a vendor promoting SELinux that discusses the reality of what it can do. I just did a random google search for SELinux MLS and came up with:
http://www.centos.org/docs/5/html/Deployment_Guide-en-US/...
It's a document written by Red Hat talking about how SELinux can be used to keep untrusted users with access to unclassified information from ever being able to access top secret information. Given the above mentioned remote root exploit, using SELinux for such a purpose is both frightening and irresponsible. I'm not exaggerating at all here; here's an entire paragraph straight out of the article:

"Some organizations go as far as to purchase dedicated systems for each security level. This is often prohibitively expensive, however. A mechanism is required to enable users at different security levels to access systems simultaneously, without fear of information contamination."

This irresponsible mindset isn't confined to SELinux: there are others who erroneously believed that information of different security levels could be compartmentalized by using virtual machines. It should be clear from the above mentioned exploits for Xen and VMWare that this view is just as foolish.

As a good contrast to the "claim it's provably secure and that x can never read/write y" approach to security features, I recently spotted Kees Cook's blog at:
http://www.outflux.net/blog/archives/2009/05/14/nx-emulat...
where he was talking about adding exec-shield to Ubuntu, while calling it NX emulation in the case where hardware NX support wasn't present. I pointed out to him that this was misleading, and gave several examples of how it provides substantially weaker protection than a hardware NX implementation. He quickly verified my claims, and furthermore found what is likely another vulnerability in exec-shield which allowed for executable heaps (though I haven't verified it myself). After this, he made the responsible decision to call it "partial NX emulation" instead.

A reasonable view like that is what's needed in Red Hat's article above. Instead of offering software as a replacement for air-gap and suggesting there's "no fear of information contamination," the caveats of the approach should be mentioned (any at all, there are none whatsoever discussed in that article). You'll save the cost of hardware and maintenance of a machine, at the potential cost of the information residing on the machine.

To wrap this up, back to what I mentioned earlier about improving the security of the kernel and how this doesn't have to do with fixing individual bugs. If you look at what's been done in terms of preventing exploitation of security vulnerabilities in userland (NX, ASLR, PIE, RELRO, SSP, etc) similar things can be done for the kernel. Kernel hackers don't have to stop all development, the changes are transparent to the user, and the protections apply to both current vulnerabilities and vulnerabilities introduced in the future.

Some of PaX's features are already tackling these problems, and continue to improve (in fact, an upcoming change to PaX will defeat the technique used by sgrakkyu's remote root SELinux-disabling exploit to easily switch from interrupt context to process context). Among the already existing features (in addition to security-improving changes for which no configurable option exists) are KERNEXEC, MEMORY_UDEREF, MEMORY_SANITIZE, REFCOUNT, RANDKSTACK, and USERCOPY.

-Brad

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 6:44 UTC (Thu) by nix (subscriber, #2304) [Link] (4 responses)

So... let me get this straight. To you this entire argument isn't about whether SELinux is an effective sandbox (it is, modulo kernel bugs, and local attackers might be impeded somewhat by having to do all their work through a pipe, so getting local access might be harder than before: your requirement for total kernel security is ridiculous on its face and counter to the security philosophy of strength in depth that you espouse elsewhere in the *same message*). It isn't about how much ease of use it brings over plain unadorned SELinux, if any (which is what the article was actually about).

It's about the *phrasing of the release announcement*?! Do you seriously think that so many people are going to read it and use the sandbox code (as opposed to, say, picking up F11 and getting it by default without reading that announcement at all) that what they think after reading the announcement will make *any* difference to security?

Do you seriously think, after twenty-plus years of viruses, that *anyone* believes *any* vendor's claim that *anything* is totally secure? (In any case, that assertion was not made here: 'more secure' than an SELinux that guards only system daemons it surely is.) The public are not idiots and aren't going to be reading this release note in enough numbers to affect security in any case.

You have a nerve talking about straw men when your entire argument is based on a misreading and you contradict yourself in a single post.

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 12:16 UTC (Thu) by spender (guest, #23067) [Link] (3 responses)

Yes, the argument is mostly about phrasing, because despite your inability to accept it, it does matter. It's not just about this single announcement, but rather about *every* announcement, discussion, or documentation about SELinux. You can see how effective the propaganda is when you look at how SELinux is discussed by users. Look at how often words like "proven" or "can only" are thrown around when talking about SELinux -- you yourself used "proven" multiple times. Do you use these same kinds of phrases when talking about NX or ASLR? It's quite silly to say a technology is "proven" or that it makes sure a process "can only" do something, modulo [list of things that bypass it].

If people aren't buying the propaganda, as you claim, then why is SELinux being offered as a replacement for an air gap among people who don't know any better? Either no such users of the air-gap replacement exist (which reality says is false), or they do exist (they do). Are you claiming that putting unclassified data on the same physical machine as top secret data is a good idea and a person doing so can be "without fear of information contamination"? Those aren't my words, they're Red Hat's -- and whether you want to believe it or not, people *are* buying it. I'd really like to hear your answer to this question, actually.

Nowhere did I make a requirement for "total kernel security" -- I simply made the point that you can't claim guarantees or "proven models" unless there actually does exist some guarantee or proof, which in this case there does not and cannot.

You failing at reading comprehension and putting words into my mouth does not a contradiction make.

-Brad

I'll give you credit, but how much?

Posted May 28, 2009 17:39 UTC (Thu) by hozelda (guest, #19341) [Link] (1 responses)

I know few real details of the Linux kernel or of SELinux, but what Brad criticizes in these comments, at least on the surface, seems completely correct.

To be precise, if SELinux is ever changed or if any part of what SELinux depends on working properly ever changes in behavior, then one has to redo the "proof". Maybe this proof redo will be easy for most changes that happen in practice, but maybe not.

In any case, what is the code that must be audited and proven correct for SELinux to work as intended? That would have to be identified and locked down so that any weaknesses could not be introduced by changing anything else. [Eg, if hacking some part of the kernel with the sole intention to compromise the system is done, then that should not affect the "SELinux promise" unless that part being hacked was included as being needed to be locked down.]

I suspect there are too many assumptions being made by those who would say SELinux is bulletproof. Software is not hardware. Have the hardware and the hardware creation processes and tools been audited?

Now, in practice, it would be a long list to document all the (e.g., hardware) assumptions needed for SELinux to work as well as, say, the alternative of isolating data on different machines "not connected" to each other.

Then again, how many vendors sell a solution where they prove all of their product's prerequisites? There is an awful lot of physics that must be documented, or at least the assumptions stated. And this says nothing about physics we can't know we don't know about. Truly, I don't believe anything is "proven" unless you talk within a limited context. We don't need a degree in philosophy to keep finding potential shortcomings with any system claimed to be perfect.

In short, I think the assumptions should be stated, if not on the glossy brochures main pages, at least on a little comment somewhere. Then again, most people that would identify some of these issues would challenge any vendor claiming some system was provably perfect, in order to weed out the actual assumptions being made. Brad might be going too far and might be playing devil's advocate ad infinitum, perhaps picking on Linux or FOSS because he has allegiances elsewhere. [?]

From what I have heard, the vast majority of the US government, and likely virtually 100.0% [note an implied margin of error] of all other institutions out there, do not have such high requirements. In those cases, a risk analysis would probably reveal weaker links than an SELinux. In those cases, some of the hidden assumptions might be known or else taken for granted no matter the vendor.

Now, let's move the spotlight to Brad completely to wrap up this comment. Does Brad know of any provably secure system, for example? Has his best choice (not counting SELinux) stated their assumptions fully? Could I get access to these assumptions and the proofs? [eg, to the full blueprints for how such a system could ever possibly behave (given its assumptions) with a full analysis of what was believed to be the pertinent physics? And some analysis of what would happen if any of the assumptions were found not to hold?]

Are we all putting things into perspective AND trying to be honest about assumptions, or is perhaps each side failing? When you talk proving security, only the most open of systems could ever honestly attempt to engage in that conversation. Does a system that is really open exist (at hardware, software,... all levels)? Does it even make sense to speak of proofs when referring to physical systems?

I'll give you credit, but how much?

Posted May 28, 2009 20:40 UTC (Thu) by spender (guest, #23067) [Link]

I'll try to answer all the questions you asked in your post.

As for what code needs to be audited for SELinux to work as it's claimed to, that's not really useful in reality, as:
1) Development on the kernel would have to stop (or the code would have to be audited constantly, which isn't cheap)
2) Auditors aren't perfect
3) The problem is architectural; as James said further down on this page (the first time I've seen an admission like this from anyone at Red Hat, but my memory may be wrong): "SELinux cannot be expected to protect against kernel vulnerabilities, because it is part of the kernel."

Indeed, most vendors exaggerate the abilities of their products. The most egregious example I can think of right now is anti-virus software; but being as the parent article is about SELinux, I talked about SELinux (with some mention of Xen and VMWare as well).

Regarding "Brad might be [...] perhaps picking on Linux or FOSS because he has allegiances elsewhere. [?]", as I (and the PaX team too for that matter) have spent just about every day for the past 8 years working on free software to improve Linux security, that's a little insulting -- but I imagine you weren't aware of that (no problem).

I don't know of any provably secure OS -- surely it wouldn't be anything like the OSes people actually use if one did exist. It's not really useful to entertain this idea; what's useful is thinking about how to improve things in ways that don't add complexity or burden on a user. Achieving higher levels of security is useful, raising the cost of developing an exploit by making exploitation techniques more difficult and complex is useful.

It's especially important now with the 2.6 development model (moreso than it was with the 2.4 series of kernels) that greater considerations be made for security in the kernel itself. I've mentioned this on previous articles, but it goes without saying that when you have a ~80MB patch of new code/changed code/removed code every 3 months, there are a lot of vulnerabilities being introduced. It's been made clear by the kernel developers that there's no intention of changing the development model, so instead of just accepting the problem and continuing in the "fix vulnerabilities that get reported to us, release a 'stable' update once a week" mindset, something more can be done.

You mentioned "weaker links than an SELinux" -- I don't know if there was a typo involved there, but I don't want to give the impression that SELinux is a weak link. Considering the length of time it's been around, its code quality has been better than most, with only a handful of vulnerabilities reported (though some were silently fixed, like the remote DoS from 2005 (I think) that I mentioned several months back. It was discussed among the vendors on a private list, noting that attacks had been seen in the wild, but still no CVE or announcement was released -- I assume because at the time the attacks had been discovered, the bug had already been fixed). Restricting what files, etc a process can access is useful too as not all vulnerabilities involve memory corruption/arbitrary code execution. In that respect, it's a good complement to the NX+ASLR protections already in place. But again, it's important to realize the limitations of the technologies so that levels of risk can be properly evaluated.

-Brad

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 18:01 UTC (Thu) by hozelda (guest, #19341) [Link]

>> If people aren't buying the propaganda, as you claim, then why is SELinux being offered as a replacement for air gap among people who don't know any better.

Who do you believe doesn't know better that was addressed by Red Hat? And since when has the air been that good at preventing information from propagating?

Most customers fairly serious about security who would take any vendor's word on perfection would seek out certifications "just to be safe." [And note that many customers have claimed to value security yet have taken the word of vendors that keep their systems closed source!]

And even with certifications, look at what can happen: http://cryptome.org/ed-curry.htm . Customers may "want" security, but most apparently speak a different language when it comes to making purchases so are willing to trust vendors and look the other way.

Anyway, for the sake of approaching intellectual honesty, the SELinux vendors (and everyone) should consider telling the whole story [and closed-source vendors should stop trying to push their wares on security-conscious buyers, since they aren't even willing to meet the basics of transparency and peer review].

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 18:57 UTC (Thu) by hozelda (guest, #19341) [Link] (1 responses)

Brad, I want to repeat, so that it's clear, that I agree with your position of not letting the largest (or any) vendor selling Linux get away with making outrageous claims; however, I need to make sure a balanced view comes out, and I want to look a little closer at your "evidence".

Well, to add balance, I want to again state that no vendor should step up to the plate if they won't at a minimum open up their blueprints. Red Hat may talk however they want, but I believe they are putting all their cards on the table for the customer and the world to verify as much as they want. This is in stark contrast to the many vendors that try to hide exclusively behind pulling off a certification (all tests can be beaten) or worse.

[You quoting] >> Given the above mentioned remote root exploit, using SELinux for such a purpose is both frightening and irresponsible. I'm not exaggerating at all here; here's an entire paragraph straight out of the article:

[You quoting] >> "Some organizations go as far as to purchase dedicated systems for each security level. This is often prohibitively expensive, however. A mechanism is required to enable users at different security levels to access systems simultaneously, without fear of information contamination."

That paragraph concludes, "A mechanism is required to enable users at different security levels to access systems simultaneously, without fear of information contamination."

It does not conclude that SELinux version anything.anything on arbitrary hardware is nirvana. That paragraph makes no allegations about a product, or for that matter about any model.

In fact, that webpage ends with the following [note the tone and subject]:

>> Efforts are being made to have Linux certified as an MLS operating system. The certification is equivalent to the old B1 rating, which has been reworked into the Labeled Security Protection Profile under the Common Criteria scheme.

I don't think Red Hat is claiming Linux is the Second Coming as it almost appears you allege they are doing.

[You said] >> A reasonable view like that is what's needed in Red Hat's article above. Instead of offering software as a replacement for air-gap and suggesting there's "no fear of information contamination"....

Yeah, unless you point to more "evidence", I am thinking that you misunderstood or misjudged that webpage.

I didn't see Red Hat claiming their product was perfect. The article even refers over and over to models, implicitly by using general terms, and explicitly by actually using the word "model" as it compares.

The entire discussion is very general in details. It is nothing remotely like a proof. It does not at all address the issue of imperfections in implementation or make any claims in this respect.

I already mentioned the certification effort underway (according to the webpage). I do very much doubt ANY certification authority will ever carry out an infallible proof or exhaustive testing, or could be trusted to do so without independent verification.

To conclude, I think you are overestimating the claims that Red Hat is presumably making. It's clear from the type of discussion that they are not offering any claims for an actual product. What they offer is a high level description of the model upon which their product is based. They offer the source code so the customer and the world has an opportunity to separate hype from reality. They possibly offer a certification. It's notable to mention that few vendors offer the entire buildable source code so that others can check up on the glossy hype. Unfortunately, I wish the open source community was advanced enough to be offering the source code to hardware.

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 20:07 UTC (Thu) by hozelda (guest, #19341) [Link]

I spotted an opportunity to make a small apology/fix.

The page being discussed http://www.centos.org/docs/5/html/Deployment_Guide-en-US/... does appear to be from a manual that is for an actual product.

Not too much of what I said changes, but let me restate some things.

[I said] >> The entire discussion is very general in details. It is nothing remotely like a proof. It does not at all address the issue of imperfections in implementation or make any claims in this respect.

The discussion is about models mostly and is very high level. However, by being within a manual for a product, unless they clarify elsewhere (see last paragraph of this comment), it can be argued they are representing the product at least at some level.

The page says, "SELinux uses the Bell-La Padula BLP model...."

[I said] >> To conclude, I think you are overestimating the claims that Red Hat is presumably making. It's clear from the type of discussion that they are not offering any claims for an actual product. What they offer is a high level description of the model upon which their product is based. They offer the source code so the customer and the world has an opportunity to separate hype from reality. They possibly offer a certification. It's notable to mention that few vendors offer the entire buildable source code so that others can check up on the glossy hype. Unfortunately, I wish the open source community was advanced enough to be offering the source code to hardware.

Well, they are making claims about a product indirectly, but I think context should show that it is about the general behavior of the product and not a statement that it has been proven that the product behaves as the model described in all circumstances without any exceptions.

I think engineers would read such a high level documentation and recognize that the documentation is the intended behavior and not a promise that the software will abide to the models described therein to perfection.

Red Hat likely disclaims many things in the actual contracts they sign with their customers. I am not familiar enough with the contracts vendors put forward to be able to compare and contrast them.

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 19:31 UTC (Thu) by hozelda (guest, #19341) [Link] (12 responses)

>> What I'm saying is, I hope the people pushing SELinux get out of the habit of throwing around phrases that exaggerate the effectiveness of SELinux or suggest that it provides a guarantee of anything. I made the same request years ago when I published my local root exploit that disabled SELinux (and every other LSM security module) -- that apparently had no effect on them, nor has the recently published remote root + SELinux disabling exploit.

I see two things.

One, Red Hat talks about a model and you talk about an implementation. Your exploits are about bugs and not about the model.

Ironically, I'm fairly sure you aren't even talking about a specific implementation (of a full system or of software) since you mention an exploit from years back, yet talk as if the new system is the same one.

Further, you talk about SELinux as being a level of security and not being the whole system ("disable SELinux"). Using that interpretation, you didn't even show a flaw in the SELinux implementation. You showed a flaw in the software that enables (turns on or off) the SELinux security.

>> Instead it was 'business-as-usual' for the vendors: just fix the remotely exploitable vulnerability that we classified as a denial-of-service (as is done with nearly all exploitable vulnerabilities for which there doesn't exist public code exploiting them), hope nobody makes a fuss about it, and move on.

Do you know of any vendor that makes a "fuss" about every bug they fix?

Do you know of a vendor that makes a greater deal about fixing bugs than does Red Hat and, in general, the open source world?

Most other vendors don't even make the slightest attempt at "fuss" by showing the world the bugs they just fixed. Red Hat and company make a certain fuss about every single bug they fix, because they take the time to document them all publicly and keep this information public as they "move on" to the next task at hand.

>> This irresponsible mindset isn't confined to SELinux: there are others who erroneously believed that information of different security levels could be compartmentalized by using virtual machines. It should be clear from the above mentioned exploits for Xen and VMWare that this view is just as foolish.

You should have continued talking about every other virtual machine on the planet.

In fact, mentioning VMWare is redundant since we already know we can't trust closed source companies or their binary-only products.

And speaking of not being able to trust closed source companies (eg, Microsoft) or their products...

>> Some of PaX's features are already tackling these problems, and continue to improve

What in the hxxx is PaX?

I went over to the pac grsecurity etc site. I can't believe you allow (broken) links on your site that possibly suggest you can (a) secure OR (b) independently implement Windows (the latter being an illogical statement).

It's difficult to take you seriously (assuming I was still taking you seriously). Do you honestly think anyone working outside of Microsoft has a shot (a shot, perhaps at odds longer than a googol to one) at securing Windows?

I am hoping I misunderstood the low content information bits I read on that site and that you will clarify just what statement you are trying to make with respect to anyone being able to secure Windows or else re-implement it (which is a nonsensical statement anyway).

[I'm assuming you own that site. If not, then can you still address these concerns I have since you appear to know a lot about this PaX?]

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 20:19 UTC (Thu) by hozelda (guest, #19341) [Link] (7 responses)

[I said] >> One, Red Hat talks about a model and you talk about an implementation. Your exploits are about bugs and not about the model.

Same caveat applies as discussed above: http://lwn.net/Articles/335115/

[I said] >> I went over to the pac grsecurity etc site. I can't believe you allow (broken) links on your site that possibly suggest you can (a) secure OR (b) independently implement Windows (the latter being an illogical statement).

I thought, at the time I wrote the above comment, that the wording on the website was a bit misleading, but if I'd known what PaX was, I might agree that the comments about "Windows .. implementation" on the website refer to PaX and not Windows itself. My bad, I think.

I still want to know if the intention was to suggest that Windows can be secured in any way by a third party.

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 21:45 UTC (Thu) by spender (guest, #23067) [Link] (6 responses)

If you want to know what PaX is/does, there's a bunch of documentation at the first link on the page: http://pax.grsecurity.net/docs/
Additional information is in the configuration help for each of its options in the patch itself.

As for the Windows comments in your post(s), I don't see what's so shocking about saying certain third parties have implemented some of the same techniques implemented in PaX. Take WehnTrust for instance, which implemented ASLR on Windows (even implementing RANDEXEC through its own version of the vma mirroring used in PaX). The source is available -- it's actually a nice piece of work, considering that it's more difficult (but more interesting/rewarding) to implement security in Windows than Linux, as you have to get around the problem of not having any source. The person who wrote it now works for Microsoft, which brings me to my next point: the anti-Microsoft view you have of their security is pretty outdated. They've actually been taking security seriously for some time now (which I can't say for the official policies of Linux kernel developers) and employ a large number of really bright security experts (like Matt Miller, the WehnTrust author).

At the same time, it's obviously true that some (or most) of the third party security improvements for Windows claim more than they're actually capable of. There was a particular product I won't mention the name of that claimed detection/prevention of "ret2libc attacks". Since no one else has actually solved this problem yet, I was curious to see what the "protection" entailed. It turned out that the software was just checking to see if the return address for an API call (only the ones it cared about enough to hook) pointed into the stack or to a function prologue -- not true detection/prevention, as it can be worked around by techniques that have been public for years.

I don't envy the monstrous task Microsoft has of improving their security. Any improvements have to be done in such a way that doesn't break application compatibility. Given the amount of software on Windows, most of which only exists in binary form and where many of the authors are long gone -- it's no small feat. In some cases their improvements turn out to be unpopular (UAC) or they have to sacrifice some additional security. This is no different from any other major vendor though -- Red Hat sacrificed some additional security in their Exec-Shield implementation in the name of perceived application compatibility.

-Brad

Walsh: Introducing the SELinux Sandbox

Posted May 29, 2009 1:01 UTC (Fri) by hozelda (guest, #19341) [Link] (5 responses)

Hey Brad, I got a "you are embarrassing yourself" email from someone providing almost no details. You can probably guess how much weight I give emails that are FUD-secretive in nature. [It's my open source mentality, I think.]

In any case, this event reminds me that I am willing to reply on specific points without feeling the obligation to become an expert on every facet of the discussion. This does mean I might misfire, but I encourage anyone to always try to correct errors in postings that bother them enough. This helps the author and the readers coming after that may also not be experts either.

Excuse me if I was harsh and am not aware of all the good work you and anyone else may have done.

>> As for the Windows comments in your post(s), I don't see what's so shocking about saying certain third parties have implemented some of same techniques implemented in PaX.

Nothing wrong. What happened is that I became suspicious of your motives and realized that, regardless of your motives, you were providing material that others out of the spotlight could point at to say, "see all that controversy surrounding Linux/SELinux". Taking everything into account, I thought maybe PaX was a product intended to compete directly with SELinux and had to ask just how a third party could ever believe they could solve Microsoft's problems for Microsoft. Some of their "problems" are based on business decisions which involve protecting monopolies -- eg, by keeping source closed and by making it difficult to have the actual interfaces fully resolved by third parties.

Microsoft can certainly try to leverage PaX. We just can't say nearly as much about the final product Microsoft comes up with without having access to the full source. A single line of code, or certainly a single module, can bypass apparent security. The task of reverse engineering is an art. The test space is virtually infinite, and you have a much longer, less precise task ahead without the benefit of source code (including that of compilers, etc). Some of the prodding is even illegal according to their EULA, meaning that the number of people even attempting such a task will be further limited.

>> considering that it's more difficult (but more interesting/rewarding) to implement security in Windows than Linux as you have to get around the problem of not having any source.

Just want to mention that interesting in one sense is not interesting in another.

>> The person who wrote it now works for Microsoft

See, even that individual probably preferred to see Microsoft's source. I mean there is a reason the FOSS world shares source. "Interesting" is great and all, but..

..show me the source.

>> the anti-Microsoft view you have of their security is pretty outdated

I did not intend to appear to mock their past (too much). I said that *all* closed source vendors are untrustworthy. This is an academic pov. It's like the example you gave where someone made outrageous claims but kept the magic secret.

Put up (the source code) or shut up is my reply to anyone that wants trust, especially when the software is that complex. Open review is the judge.

In Microsoft's case, seeing some code (let's assume we get access) doesn't convince me an iota that this is what they are shipping, so while I could critique, I would not for a second believe I was actually getting the information I would need to make a security decision in the way I could if I had source.

I don't trust closed source (that I can't change and build to verify). I might run the binaries and accept certain risks, however, but that is a different issue from having the confidence that comes only from open source code.

>> They've actually been taking security seriously for some time now (which I can't say for the official policies of Linux kernel developers)..

That's an unfounded statement. Linus and many Linux developers have taken security more seriously than many that will ever pass through Microsoft.

You can't just start taking security seriously after so much has been invested and pretend you care more about security than those that have taken it more seriously from the start.

Microsoft has a long history of deceptive marketing shrewdness. Their engineers are not the ones making statements on behalf of Microsoft. In fact, their engineers don't make the final decisions in any way.

>> and employ a large number of really bright security experts

I would bet a farm that Microsoft cannot match the number of really bright security experts that currently do or will work on Linux. [Academic institutions, private security researchers, major companies with lots of expertise on staff.. etc, all have access to Linux but not to Windows.]

>> Any improvements have to be done in such a way that doesn't break application compatibility.

Every update Microsoft sends out breaks something, generally speaking. Vista was a mess, but you don't have to be that obvious in one large shot to know that lots of things can break when you don't test for them (conduct analysis, etc). Fixing a bug in Windows will break software that was written with that bug in mind, for example.

Microsoft's interest is to their stockholders, not to the users. They don't give users the benefit of source code, for starters. The users have the source to Linux and their own best interests in mind.

Red Hat is kept in check (so long as they stick to their model) by the public, by the wide open source community. Who keeps Microsoft in check? Certainly not the public.

Closed source is magic. I don't trust it.

And everyone who has gone to work at Microsoft apparently also preferred not to take Microsoft at their word but rather wanted to see some source (of course, mere employees cannot expect to have control and knowledge over the final product because of access restrictions and inability to privately verify the bits through their personal compilation checks).

>> In some cases their improvements turn out to be unpopular (UAC) or they have to sacrifice some additional security. This is no different from any other major vendor though -- Red Hat sacrificed some additional security in their Exec-Shield implementation in the name of perceived application compatibility.

You still can't compare what you can't see with what you can.

[Broken record again: You don't know what Microsoft ships. You don't know their bugs or what process gives the final pass over their bits. Ditto for all their updates.]

To finish, I don't intend to be harsh to you. My beef, primarily, is with those that attack FOSS unjustly -- eg, with what I like to call Monopolysoft, with their dirty tactics, false promises, and lack of transparency.

[FWIW, they keep cutting back and losing quality individuals. Customers relying on them, a shrinking number, might not be thinking long term in putting all their eggs into a single basket.]

Walsh: Introducing the SELinux Sandbox

Posted May 29, 2009 2:33 UTC (Fri) by spender (guest, #23067) [Link] (2 responses)

A couple comments:

As mentioned on the PaX site, Microsoft has DEP (their HW-based PAGEEXEC implementation) and (a weak) version of ASLR. I had remembered reading that Server 2008 would prevent a service from respawning for a period of time if it had crashed multiple times within a short period (deterring bruteforcing of ASLR -- something vanilla Linux doesn't have yet), but when I last went to look for a link about it, I couldn't find one.

EULAs don't stop security researchers from doing their work, nor does not having the source. Reverse engineering has advanced quite a bit in the past couple years, as has the state of the art in decompilation. Tools like BinDiff make vulnerability finding in patches much quicker, while the Hex-Rays decompiler does a decent job of turning x86 assembly into C-like code. The Windows implementations of DEP and ASLR are pretty well documented publicly. Several companies have published papers on them and on techniques for defeating the protections (for instance, it took a while for both Microsoft and Linux vendors to figure out that having a make_stack_executable() function without any safeguards wasn't such a good idea).

I sometimes wonder if there's a 'bystander effect' or 'free rider problem' involved with the "many eyes makes bugs shallow" view. There are a lot more people having a feeling of security based on the assumption that other people are reviewing the source instead of them, than there are actually people reviewing the source. The reality is, even though you have the kernel source, I'm pretty sure you don't audit it for security vulnerabilities, and if you did, it'd be impossible for you to audit every single line of it yourself. So in the end, you're putting trust in some entity, whether it's Microsoft or Linux vendors. It's naive to think Linux vendors are somehow unlike Microsoft in that they don't care about their stockholders -- everyone's in it for the money.

I wouldn't bet the farm anytime soon (though you've restated the challenge, which was initially Microsoft employees vs employees of Linux vendors, collectively) -- I know quite a number of people in the security industry. I can name several exceptionally bright security experts working for Microsoft, but can only think of one among the Linux vendors, who works for SuSE. Indeed, there are other bright people in the industry that work on Linux security -- Julien Tinnes and Tavis Ormandy come to mind, but neither of them is employed by a Linux vendor. To me, that says something. After all, security should be the responsibility of the vendor, not Google; just as it's Microsoft's responsibility, not McAfee's or Symantec's. Also, as far as the commercial security industry goes, there are far more people looking for vulnerabilities in Microsoft's software than there are looking at Linux -- simply because exploiting Microsoft software is more profitable (the same goes for underground exploit writers).

Regarding:
> That's an unfounded statement. Linus and many Linux developers have
> taken security more seriously than many that will ever pass through
> Microsoft.
you've apparently missed several past discussions both on here and LKML. It's pretty much agreed upon by the security industry that the official policy of the kernel developers (intentionally covering up security vulnerabilities by fixing the bugs without mentioning security impact that they're aware of) is damaging to users. Linus was nominated last year for the "Lamest vendor response" award: http://pwnie-awards.org/2008/nominees.html#lamestvendor

You mentioned not knowing about what Microsoft ships in their updates -- in fact, the security industry has a pretty good idea about what's being shipped in the updates. A huge number of exploits are written based on diffing an old binary with a newly patched binary. Though silent fixes could easily work their way into a service pack, it's harder to work one into the small patches that go out each month. The patches that are released are marked as security fixes. Sometimes obviously they get their security classification wrong and rightly get called to task for it. Of course, they seem to do a better job of it (again, this goes back to why it's important for the Linux vendors to have security experts on staff) than the tendency to call most vulnerabilities in the Linux kernel just "denial of service", when they're exploitable in reality. This has been commented on several times by several security experts. Three examples:
http://kernelbof.blogspot.com
http://www.security-express.com/archives/bugtraq/2006-07/...
http://securitywatch.eweek.com/exploits_and_attacks/inter...

> Closed source is magic. I don't trust it.

In summary, everything is magic -- verify ;)

-Brad

Walsh: Introducing the SELinux Sandbox

Posted May 29, 2009 3:10 UTC (Fri) by spender (guest, #23067) [Link]

Small update:
Though I still can't find the original link I read where it discusses "windows service recovery" as a useful/necessary addition to ASLR, I did find: http://technet.microsoft.com/en-us/library/cc262589.aspx
which shows how it's configured. By default in Windows 2003 and up, a service is allowed to crash 3 times, with a minute in between crashes. After the third crash, the service won't be restarted (thus deterring an ASLR bruteforce).

Given the recent discussions on LKML of quality of randomness for ASLR, I'm surprised no one brought this up. After all, randomness quality isn't of huge concern when nothing stops you from running your exploit as many times as it takes to get the addresses right. Nergal wrote his segvguard module years ago when he wrote his classic ret2libc paper, and similar code has been in grsecurity for years as well. Even if a kernel solution isn't used, it seems like it'd be a good job for a TCP wrapper like xinetd. That still doesn't help suid binaries, though.

-Brad
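The respawn-limiting policy described above (don't restart a crashed service more than a few times) can be sketched as a small supervisor loop. This is an illustrative Python sketch under invented names and limits, not any vendor's actual mechanism: the point is simply that once the crash budget is spent, the attacker can no longer keep re-rolling the address space.

```python
# Illustrative sketch of a respawn limiter: restart a crashing
# "service" on abnormal exit, but give up after max_crashes failures,
# which is what deters brute-forcing of ASLR by repeated crashing.
import subprocess
import sys
import time

def supervise(cmd, max_crashes=3, delay=0.0):
    """Restart cmd on abnormal exit, up to max_crashes times."""
    crashes = 0
    while crashes < max_crashes:
        rc = subprocess.call(cmd)
        if rc == 0:                  # clean exit: nothing to restart
            return "exited cleanly"
        crashes += 1
        time.sleep(delay)            # real systems wait ~1 minute here

    return "giving up after %d crashes" % crashes

if __name__ == "__main__":
    # A "service" that always fails (nonzero exit, standing in for a
    # segfaulting exploit attempt).
    print(supervise([sys.executable, "-c", "raise SystemExit(1)"]))
```

A real implementation (segvguard, grsecurity's variant, or the Windows service recovery settings mentioned above) hooks the crash itself rather than the exit code, but the budget logic is the same.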

Walsh: Introducing the SELinux Sandbox

Posted May 29, 2009 5:03 UTC (Fri) by hozelda (guest, #19341) [Link]

The EULA does slow down work.

I also think you might be a little over-enthusiastic if you were suggesting that the reverse engineering being undertaken will rival source code access. Just consider the amount of work produced per unit time by the FOSS world vs. the "useful" work being achieved by the underground or by the research world focused on Windows. Clearly one type of work is much more difficult (ie, the one without source code access).

You have high esteem for the top researchers but perhaps miss out on the value and contributions from those not known in the industry. Not to drag SELinux back into this, but a bug fix can have many repercussions, including fixing security issues in the making (not yet analyzed by black hats or even by expert white hats because of information overload). Many people work on their concept of correctness and don't stop to categorize something as security related.

Remember that it takes much more time to develop an exploit usually than it does to fix the bug. With two competent parties analyzing code, which is more likely to finish first, on average?

In fact, you get something of a race to reporting bugs in the FOSS world (to at least get credit) because it is so much quicker to do this than to build an exploit. In the closed source world, you have lots of time to build the exploit with much time left over to craft a business plan around the series of exploits you will build.

Also, the number of eyeballs matters. There is a reason FOSS is quick to fix problems and many times anticipates them, one way or the other.

Of course, if the reverse engineers had done their jobs (assuming it was easy), then they would be patching Microsoft's software for Microsoft just as quickly as is done for FOSS. [I was being facetious.]

You mentioned all the work being done for Windows. Well, I think as time approaches infinity, we see people moving off Windows and onto the more transparent Linux. Research advances more when you have access to the most information possible and then try to fill in those fewest number of remaining holes. This obvious conclusion is among many not escaping those slowly but surely moving to support and contribute to FOSS.

Mainly with Microsoft software in mind, I say that you don't need to (eg) overflow a stack to subvert security or disrupt privacy. You can do it right under people's noses, and it is difficult to catch this because nothing appears out of the ordinary among an ocean of ordinary stuff flashing by at hundreds of millions of instructions per second or faster.

In short, I am not going to be convinced that the reverse engineering job is keeping a check on Microsoft no matter how many tools you mention, nor am I going to be convinced that reverse engineering is comparable to having source. It's one thing to find flaws (diff to help you out, etc). It's another to verify things are correct or are not misbehaving in places you haven't looked. Finding a gem vs accounting for everything in the land. [And surely, the tools don't get as much information as source code+binaries]

To belabor the point: RE leads to much more information overload. You filter out much to focus on things of interest. If an 80MB source patch for Linux is tough to follow, and if everyone is trusting everyone else to review the source as you stated, then how much more difficult is the case when you don't have source yet have lots of changes in binaries?

Remember that binaries are updated while the system is running, with no need to announce what is going on. The system software can piggyback on any network conversation in any way it wants. It's too large a space to track, which is why just looking at the network is not enough, and the binaries clearly have not been deciphered completely. Spotting problems in diffs is much easier than understanding the base. How well does the research community know the Windows base? I don't think nearly well enough.

[Eg, you invoke things that Microsoft themselves have stated about their products rather than pointing to an analysis of their binaries to actually show/prove X or Y property is there. With sarcasm: don't the reverse engineers know all about Microsoft products the way we know about Linux? Why can't we point out where Microsoft does this or that in actuality (not in theory, nor in their marketing, nor in their purposely scrapped projects), as we can for Linux, but instead rely on their cribnotes to the public? What kind of checks are those? Who cares that these disassembling tools have progressed "hugely" recently?]

As concerns Red Hat being a public corp just like Microsoft, I addressed that already by saying that the issue is about checks and balances (we look towards their business models for that: one relies on openness, the other on closedness) and additionally that Microsoft has monopoly profits to protect.

Walsh: Introducing the SELinux Sandbox

Posted Jun 6, 2009 11:07 UTC (Sat) by willezurmacht (guest, #58372) [Link] (1 responses)

When a person jumps into a discussion which covers topics he isn't particularly knowledgeable of, it's a matter of humility, good manners and taste to avoid making claims and dogmatic statements, like you've done so far.

But, if you are in for being the comic relief here, I welcome you as our special open source Leeroy Jenkins. Neither Brad nor the PaX Team know what they are talking about! Go get 'em, tiger!

Walsh: Introducing the SELinux Sandbox

Posted Jun 6, 2009 16:35 UTC (Sat) by dlang (guest, #313) [Link]

exactly what knowledge do you have of the poster's qualifications that makes it so clear that you should ridicule him/her that way?

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 22:31 UTC (Thu) by nix (subscriber, #2304) [Link] (3 responses)

> In fact, mentioning VMWare is redundant since we already know we can't trust closed source companies or their binary-only products.
More generally, the claim (whether stated or implied is not relevant: this is the claim as everyone understands it) that security systems make is that modulo bugs they provide some level of security. Of course, unless the system is utterly trivial, there are always bugs: but that doesn't mean the system is useless. It just means that it cannot keep out a sufficiently determined attacker.

Brad and PaXTeam assert (although they will doubtless deny this as part of the charmless dance of evasion they both employ whenever their reasoning is faulty) that this renders all such security systems worthless. But this is nonsense. The lock on my front door will not keep out an attacker who is determined enough to wade through plant growth and break my kitchen window. This is easy to detect --- but my house security systems also won't keep out an attacker who cuts through the glass of the patio door, which can be done nearly silently with enough care. The thing is that most attackers simply aren't going to do that: it's difficult and annoying and they're more likely to simply skip the house and go on to the next one, unless this is a targeted attack. The only way to keep out targeted attacks from such people is to go and live on a military base: but that merely opens me to attacks from much more powerful actors, who in time of war are likely to attack the military base and take me out as collateral damage, without even meaning to, but who wouldn't bother attacking a single anonymous house in the suburbs.

If you are under targeted attack by a sufficiently determined and ingenious attacker --- the sort of person Brad appears to be considering, who is willing to search for new remote and local vulnerabilities, write exploits for them, and target specific sites with them --- then you're in serious trouble and the best thing to do is simply get off the net until they go away (this is hardly optimal, but improving it is really up to law enforcement or network infrastructure: there's nothing individuals can sanely do). In the case where the attacker finds an exploit for a new vulnerability and launches mass attacks with it, we are somewhat protected by techniques such as ASLR, which can make many classes of exploit more *likely* to fail: mass attackers are likely to give up and go on to the next host before then. This is exactly the same sort of 'defense in depth' with non-100%-perfect but make-cracks-harder systems that Brad has been disparaging. (The strange thing is that a lot of the defences in grsecurity are of this type, so Brad obviously knows this. I'd be stunned if he didn't, 'cos it's security 101 stuff.)

All this is true no matter what security system you are discussing. All security systems for commodity OSes are really there to keep out mass attacks and attacks by the non-ingenious. Thankfully, nearly all attacks are of these classes.
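The "more likely to fail" point about ASLR can be put in rough numbers, and it connects to the respawn limits discussed earlier in the thread. A back-of-the-envelope sketch (the entropy figures are illustrative, not any particular system's): with n bits of ASLR entropy, a single guess succeeds with probability 1/2^n; an attacker who can respawn the target forever expects to win after about 2^(n-1) tries, while a limit of k crashes caps the success probability at roughly k/2^n.

```python
# Back-of-the-envelope numbers for how ASLR entropy interacts with
# respawn limits. Entropy figures below are illustrative only.

def success_probability(entropy_bits, attempts):
    """Chance that `attempts` distinct guesses hit the right layout."""
    guesses = 2 ** entropy_bits
    return min(attempts / guesses, 1.0)

# Unthrottled attacker against a service that respawns forever:
# expected tries are about half the search space.
print(2 ** 16 // 2)                  # 16 bits of entropy -> 32768

# Same entropy, but the service stops respawning after 3 crashes
# (the Windows 2003 default discussed above):
print(success_probability(16, 3))    # 3/65536, about 4.6e-05
```

In other words, the quality of the randomness matters far less than whether the attacker gets unlimited retries, which is the point made about bruteforce deterrence elsewhere in this thread.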

Walsh: Introducing the SELinux Sandbox

Posted May 29, 2009 1:14 UTC (Fri) by hozelda (guest, #19341) [Link] (1 responses)

nix, I think you are basically correct, but an attack on a FOSS system is complicated by the fact that users can take active steps at any time to help thwart attacks. These users have a lot more information at their fingertips than those who don't have source code nor an ability to controllably change their system around whenever they feel it is necessary.

It's ridiculous how vulnerable you are when you depend fundamentally on information others are keeping secret from you.

Walsh: Introducing the SELinux Sandbox

Posted May 29, 2009 7:03 UTC (Fri) by nix (subscriber, #2304) [Link]

Yes. Hell, even if they don't know the attack class and can recompile
everything, they can simply randomly perturb their system
(function-neutral changes to ABI, say) or change their architecture:
security through obscurity may be an ugly hack but the number of people
launching attacks on old MIPS boxes is minimal :)

Walsh: Introducing the SELinux Sandbox

Posted May 29, 2009 17:25 UTC (Fri) by Arach (guest, #58847) [Link]

> Brad and PaXTeam assert (although they will doubtless deny this as part
> of the charmless dance of evasion they both employ whenever their
> reasoning is faulty) that this renders all such security systems
> worthless.

Excuse me, but I can't find where either of them asserts anything like that. Would you be so kind as to provide a relevant quote or a link, please?

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 3:37 UTC (Wed) by Kit (guest, #55925) [Link] (19 responses)

>I'm assuming the "no actual compromise [...] has succeeded" part of the above comment was a typo.
I think you're misinterpreting what he wrote. I read it as meaning that the _goal_ is to stop the malicious program _early enough_ (by limiting system access) that the exploit won't have a chance to succeed, by limiting the surface area of the system (not that it's already 100% effective today, just that is the goal/'focus').

For example, why on earth does a web browser need r/w access to all of the user's files all the time? What about confining the browser so it can only read/write its own files (configuration, cache, history, etc) but has no DIRECT access to any other files? Downloading/uploading could partly be partitioned off to a separate process: for a download, the browser asks the other process that it wants to save a file named 'NAME', and the other process opens the file dialog for the user to select the file; then the browser pipes the data to the other process, and that other process writes it to disk (upload would be pretty much the same, just reverse the data flow). This'd basically limit browser-based exploits to *only* being able to steal your browser's own data (and they would also likely not be able to persist across instances), unless an additional exploit or two are also found in the limited area that the browser can actually access (which'd still be a huge improvement over the current model of 'screw the user, protect /bin/bash!' for non-server systems).
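The broker idea sketched above is easy to prototype: the untrusted "browser" process holds no file descriptors for user files and can only send save requests over a pipe, while the trusted broker enforces the path policy before touching the disk. A minimal, illustrative Python sketch (the message format, names, and one-directory policy are invented for the example, not any real browser's design):

```python
# Toy broker pattern: the untrusted side asks, the trusted side
# validates and writes. Writes are confined to one directory and
# path traversal is rejected.
import os
import tempfile
from multiprocessing import Pipe, Process

# realpath() so symlinked temp dirs (e.g. /tmp on macOS) compare correctly.
ALLOWED_DIR = os.path.realpath(tempfile.mkdtemp(prefix="downloads-"))

def resolve_save_path(name):
    """Return the real destination, or None if it escapes ALLOWED_DIR."""
    dest = os.path.realpath(os.path.join(ALLOWED_DIR, name))
    return dest if os.path.dirname(dest) == ALLOWED_DIR else None

def broker(conn):
    """Trusted side: validates every request before touching the disk."""
    while True:
        req = conn.recv()
        if req["op"] == "quit":
            break
        dest = resolve_save_path(req["name"])
        if dest is None:
            conn.send({"ok": False, "error": "path rejected"})
            continue
        with open(dest, "wb") as f:
            f.write(req["data"])
        conn.send({"ok": True, "path": dest})

def untrusted_browser(conn):
    """Untrusted side: it can only ask; it never opens user files itself."""
    conn.send({"op": "save", "name": "page.html", "data": b"<html></html>"})
    print(conn.recv()["ok"])          # a legitimate save succeeds
    conn.send({"op": "save", "name": "../../etc/shadow", "data": b"x"})
    print(conn.recv()["error"])       # traversal out of the dir is refused
    conn.send({"op": "quit"})

if __name__ == "__main__":
    broker_conn, browser_conn = Pipe()
    child = Process(target=untrusted_browser, args=(browser_conn,))
    child.start()
    broker(broker_conn)
    child.join()
```

A real implementation would add the user-facing file dialog on the broker side and an OS-level mechanism (SELinux, seccomp, dropped privileges) to stop the browser process from simply bypassing the pipe, which is exactly the gap Brad's reply below points at.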

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 4:02 UTC (Wed) by spender (guest, #23067) [Link] (10 responses)

You really should refrain from using words like "only" (especially emphasized) when talking about what arbitrary code executing in the context of a large piece of software with many dependencies and addons is limited to doing. You didn't mention kernel compromises that disable SELinux in your list of things it's limited to. Take the vmsplice exploit for instance. That exploit required mmap, munmap, pipe, and vmsplice, only 4 things which all processes on the machine were permitted to use. What files of the user the exploit could write to didn't even come into the picture.

-Brad
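Brad's objection can be restated in set terms: a syscall policy only stops an exploit if at least one call the exploit needs lies outside the allowed set, and the vmsplice exploit needed only four calls that essentially every process was permitted to make. A toy sketch (the syscall lists are illustrative, not a real policy):

```python
# A sandbox policy helps only if some syscall the exploit requires
# falls OUTSIDE the allowed set. Lists here are illustrative.

def policy_blocks(allowed, exploit_needs):
    """True if at least one required syscall is denied by the policy."""
    return not set(exploit_needs) <= set(allowed)

allowed_for_everyone = {"read", "write", "mmap", "munmap",
                        "pipe", "vmsplice", "close", "exit"}
vmsplice_exploit = {"mmap", "munmap", "pipe", "vmsplice"}

print(policy_blocks(allowed_for_everyone, vmsplice_exploit))   # False

# Only a policy that actually denies one of the four calls helps:
print(policy_blocks(allowed_for_everyone - {"vmsplice"},
                    vmsplice_exploit))                         # True
```

The file-access restrictions discussed above never enter this check at all, which is the substance of the objection.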

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 6:50 UTC (Wed) by hppnq (guest, #14462) [Link] (7 responses)

But in the same thread you say that it might be a good idea to improve the security of the kernel itself. How would patching the kernel help against kernel bugs?

Note that the vmsplice *vulnerability* needs to be exploited, actually. While it is obvious that this also actually happens, somewhere, this does not mean that we should therefore grab our wands and blow all vmsplice vulnerabilities into oblivion. I think it is rather obvious that it is a better idea to create an architecture that itself can be proven to be more secure than thinking that a random pile of code can ever be made completely secure.

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 22:40 UTC (Wed) by spender (guest, #23067) [Link] (6 responses)

I'm a little confused by your post; maybe you can clarify a few things for me:

1) Was the use of "vulnerability" in italics a way of correcting my use of the phrase "vmsplice exploit"? My usage was correct -- I was referring to the actual publicly released exploit for the vulnerability so that I could comment on what system calls were used in it.

2) "How would patching the kernel help against kernel bugs?" Take NULL pointer dereference vulnerabilities as an example. If the kernel is unable to access userland memory directly, then these vulnerabilities become unexploitable for anything but a DoS. Would you not consider that patching of the kernel "help against kernel [vulnerabilities]"?

3) What's this architecture you're referring to? Are you saying the only options are fixing individual bugs or throwing SELinux-level complexity at the problem?

-Brad

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 10:29 UTC (Thu) by hppnq (guest, #14462) [Link] (5 responses)

Was the use of "vulnerability" in italics a way of correcting my use of the phrase "vmsplice exploit"?

Err, no. It was meant to stress "vulnerability". Even if an exploit for a vulnerability exists -- and let's just assume this is always the case -- it does not mean that you are also actually vulnerable. This is perhaps the most important part of security management: know your vulnerabilities. I mentioned it because this is something you seem to overlook. There is nothing wrong with that in discussions about specific vulnerabilities, but you are dismissing entire frameworks here.

Would you not consider that patching of the kernel "help against kernel [vulnerabilities]"?

Of course it would help making the kernel more secure. But it will not rule out kernel bugs. What's more: it seems a bad idea to think that any specific part of the kernel is able to protect the kernel.

What's this architecture you're referring to?

The architecture of which, for instance, SELinux is a part. Or grsecurity. Or my shielded network cable. As opposed to saying "this piece of code is secure".

Are you saying the only options are fixing individual bugs or throwing SELinux-level complexity at the problem?

No. I am saying that security follows from principles. A bug-free kernel with a perfect SELinux implementation would still not make most people safe -- whatever "safe" means for them.

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 12:48 UTC (Thu) by spender (guest, #23067) [Link] (4 responses)

I never talked about ruling out kernel bugs, you did. I was talking about making the kernel more secure, which involves reducing the number of exploitable vulnerabilities -- specifically by making certain vulnerability classes unexploitable.

"it seems a bad idea to think that any specific part of the kernel is able to protect the kernel." -- I just gave you an example that you agreed makes the kernel more secure. Do you need more examples? I listed them in another post. Why's it such a bad idea to make classes of vulnerabilities unexploitable and thus prevent someone from being able to take advantage of an applicable vulnerability for the purpose of arbitrary code execution in the kernel?

You said that it's a bad idea to think that the kernel can protect itself, but then you go on to say that SELinux is a good way of protecting the kernel (or at least, better than fixing a couple bugs). I agree -- it's better than fixing a couple bugs, but it's the wrong approach for protecting the kernel. And like it or not, a lot of what SELinux bothers itself with is an attempt (either directly or indirectly) to protect the kernel. There's no reason really to restrict some obscure system call that an application doesn't use in any of its code-paths unless you're assuming the possibility of arbitrary code execution. Even then, there's not much point in restricting the obscure system call (there are plenty more useful ones for the attacker that aren't restricted) unless you're trying to reduce the attack surface of the kernel.

It should be clear that using SELinux to try to protect the kernel isn't very successful, especially in the face of remote exploits (as noted earlier). It calls into question the usefulness of the additional complexity required for the vain attempt at keeping attackers either from doing things they don't want to do that they can do in other ways, or from exploiting the kernel. It seems to me that it makes more sense to make classes of vulnerabilities unexploitable. It adds no additional complexity or burden on the user and has demonstrated itself to be more useful against real attacks. Take, for instance, what's been done (as I mentioned in another post) in terms of hardening userland applications invisibly -- these kinds of changes (when implemented properly at least) are incredibly useful. In fact, the addition of a protection to SELinux (which really sticks out from its other features, and was contributed by a 3rd party) which is essentially PaX's MPROTECT feature, is actually one of its most useful protections. What's needed is more of that reality-based security, not academic pie-in-the-sky solutions. The former's been working well for years; the latter has been struggling for over a decade to be relevant.
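The W^X restriction behind PaX's MPROTECT (mirrored by SELinux's execmem check) can be illustrated with a short probe, a sketch rather than anything from the thread: ask the kernel for one page that is writable and executable at the same time. A hardened system refuses; a stock kernel usually grants it.

```python
# Sketch: request a simultaneously writable+executable anonymous page.
# Under PaX MPROTECT or SELinux's execmem restriction this is denied;
# an unhardened kernel normally allows it. Unix-only.
import mmap

def wx_mapping_allowed():
    try:
        m = mmap.mmap(-1, mmap.PAGESIZE,
                      prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
        m.close()
        return True
    except (OSError, ValueError):
        return False

print("W|X mapping allowed" if wx_mapping_allowed() else "W^X enforced")
```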

-Brad

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 14:57 UTC (Thu) by hppnq (guest, #14462) [Link] (3 responses)

Why's it such a bad idea to make classes of vulnerabilities unexploitable and thus prevent someone from being able to take advantage of an applicable vulnerability for the purpose of arbitrary code execution in the kernel?

It is not a bad idea, although I can't see what you mean by "unexploitable". What I was trying to say is not rocket science, nor is it clouded in riddles.

I never talked about ruling out kernel bugs, you did.

Why on earth do you waste your time on this? Yes, I confess: I did mention ruling out kernel bugs. I invite you to read it again.

Anyway. That Usenix article about automated kernel patching was quite interesting, but also quite silly. Talk about academic pie-in-the-sky solutions. Talk about ruling out kernel bugs.

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 16:05 UTC (Thu) by spender (guest, #23067) [Link] (2 responses)

The Usenix article was linked for its quantification of kernel vulnerabilities at any given time, specifically those that are silently fixed or mislabeled by vendors. That's why I specifically quoted that part in my other post; my linking to it doesn't imply my agreement with its conclusions -- I completely disagree with their conclusion/solution.

What don't you understand about "unexploitable"? Understanding that would be pretty important in determining whether the thing I mentioned helps kernel security or not, wouldn't it? You said the example I gave both helps kernel security and is not a bad idea, but neither of those things match up with what you said earlier:
1) "How would patching the kernel help against kernel bugs?"
2) "it seems a bad idea to think that any specific part of the kernel is able to protect the kernel."

So yes, what you're trying to say is clouded in riddles, because it doesn't make any sense whatsoever.

-Brad

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 17:58 UTC (Thu) by hppnq (guest, #14462) [Link] (1 responses)

You keep seeing things in black and white. So to you, with the right kernel patch (grsecurity, I presume) in place, things become "unexploitable" at one end of the spectrum, while one vulnerability in SCTP blows away SELinux completely at the other end of the spectrum.

What I am saying is: neither grsecurity nor SELinux will give you the security you claim they do (not) provide, unless you also seriously look at other factors. This is extremely straightforward; the Five Things To Keep In Mind point this out as well. (Your rant and vulnerability disclosure in Thing 2 shine a remarkable light on the Usenix paper indeed.)

I am not sure whether I am really unclear, or whether you really don't understand what I mean, but I think I have said enough about this now.

Walsh: Introducing the SELinux Sandbox

Posted May 29, 2009 19:08 UTC (Fri) by Arach (guest, #58847) [Link]

> You keep seeing things black and white. So to you, with the right kernel
> patch (grsecurity, I presume) in place, things become "unexploitable" at
> one end of the spectrum, while one vulnerability in SCTP blows away
> SELinux completely at the other end of the spectrum.

Brad was talking about making a *single* class of bugs unexploitable *by design* (with hardware-enforced restrictions of memory management), not about any "things" becoming unexploitable ever.

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 15:23 UTC (Wed) by Kit (guest, #55925) [Link] (1 responses)

>You really should refrain from using words like "only" (especially emphasized) when talking about what arbitrary code executing in the context of a large piece of software with many dependencies and addons is limited to doing.
Did you miss where I said 'unless an additional exploit or two are also found in the limited area that the browser can actually access'? Surely limiting the surface area in which an exploit could possibly happen is a GOOD thing? And the reason I said 'only' is because in this situation, if the browser is exploited, it can't just immediately copy all your sensitive data to $EVIL_HACKER and then wipe your home directory.

>What files of the user the exploit could write to didn't even come into the picture.
Yes, it does. The user cares about HIS data when it comes to desktop systems (which this sandbox is an attempt to help protect), and the traditional security model does pretty much NOTHING to protect that on a standard desktop. Not all systems are far-off remote servers where no one ever logs in locally; they deserve security systems designed for their situations, which so far the traditional systems have largely failed to provide.

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 22:26 UTC (Wed) by spender (guest, #23067) [Link]

You're mixing up terminology. You used the word "exploit" which has a very specific meaning, but it seems like you're now wanting to be credited for meaning "vulnerability." When you say "unless an additional exploit or two are also found in the limited area that the browser can actually access" you're saying that there exist exploit binaries on disk which the browser process is allowed by SELinux to access and execute. In which case, I didn't miss anything at all and it's you who doesn't understand the meaning of "arbitrary code execution."

Now, if you *meant* to say that "unless there is an additional vulnerability or two in the code-paths of the kernel that a large and complex binary like a browser can reach," then we'd be in agreement.

-Brad

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 7:24 UTC (Wed) by tzafrir (subscriber, #11501) [Link] (2 responses)

So I downloaded a huge file in the browser and now I need to wait for it to be copied through a pipe?

Nice.

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 10:07 UTC (Wed) by dgm (subscriber, #49227) [Link]

It's almost certain that your system can pipe data faster than your network connection can deliver it. Several orders of magnitude faster, typically.

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 15:41 UTC (Wed) by Kit (guest, #55925) [Link]

If I read you right, you thought I meant that the file is first downloaded to a file that the browser can write to, and then, when it's finished, that file is read and piped to the other process? If so, that's not really how the system would work as I interpreted it.

How I see it is this:
1.) The browser wants to download a file (either by the user explicitly clicking on the link or via javascript, or whatever)
2.) The browser notifies the download service (also with the recommended filename, as well as the mimetype)
3.) The download service opens the desktop environment's normal file save dialog box
4.) The user decides where to save the file
5.) The download service tells the browser that the download was approved
6.) The browser begins downloading the data from the remote server
7.) The browser writes that data to the pipe to the download service (*not* to a temporary file or anything)
8.) The download service writes that file's data to disk

At no point does the browser have to write data to the disk; all the data is immediately transferred over the pipe.
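Steps 6 through 8 of the scheme above can be sketched as follows (a hedged illustration: the process names, chunk size, and use of threads as stand-ins for separate processes are mine, not from the comment):

```python
# Sketch of steps 6-8: the sandboxed "browser" never opens a file; it
# streams bytes through a pipe, and only the trusted "download service"
# holds a descriptor opened for writing on disk.
import os
import tempfile
import threading

def browser(write_fd, payload):
    # steps 6/7: data "downloaded" from the server goes straight to the pipe
    with os.fdopen(write_fd, "wb") as pipe:
        for i in range(0, len(payload), 4096):
            pipe.write(payload[i:i + 4096])

def download_service(read_fd, dest_path):
    # step 8: the service, not the browser, writes the file to disk
    with os.fdopen(read_fd, "rb") as pipe, open(dest_path, "wb") as out:
        while chunk := pipe.read(4096):
            out.write(chunk)

payload = os.urandom(100_000)  # stand-in for the remote file's contents
read_fd, write_fd = os.pipe()
dest = tempfile.NamedTemporaryFile(delete=False).name
service = threading.Thread(target=download_service, args=(read_fd, dest))
service.start()
browser(write_fd, payload)
service.join()
print("saved", os.path.getsize(dest), "bytes without the browser touching disk")
```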

For more security, the browser itself could be further broken up into multiple parts, akin to how Chrome is structured... which'd help isolate the X server from the remote data (I'd imagine that the X server would probably be the weak link in this situation), not to mention having the added benefit of one tab not slowing down all the others (at least in an ideal world).

The SELinux Sandbox and small utility programs

Posted May 27, 2009 15:23 UTC (Wed) by davecb (subscriber, #1574) [Link] (2 responses)

Back in the days of mainframes, you specified the files or other resources a program was going to need in a "job control" language (JCL).

If we collect and save the JCL for all sorts of programs, we can then use SE Linux policies to limit them to using only the resources they need, making attacks by subverting programs much more difficult. Now an attacker needs to not only modify the program, but also change an SE Linux policy.
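As a sketch of what such a JCL-derived policy might look like (all type, module, and interface names here are illustrative, not from any shipped policy), a module confining a grep-like tool to read-only access on user files could read:

```
# Hypothetical policy module: confine grep_t to reading user files.
policy_module(grep_sandbox, 1.0)

type grep_t;
type grep_exec_t;
application_domain(grep_t, grep_exec_t)

# The "JCL": the only resources the program declared it needs.
allow grep_t user_home_t:file { getattr open read };

# No socket rules appear, so under SELinux's default-deny model the
# confined grep cannot open a network connection at all.
```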

--dave

The SELinux Sandbox and small utility programs

Posted May 27, 2009 18:57 UTC (Wed) by Trelane (subscriber, #56877) [Link] (1 responses)

could this perhaps be done through extended attributes?

The SELinux Sandbox and small utility programs

Posted May 27, 2009 19:10 UTC (Wed) by davecb (subscriber, #1574) [Link]

The label and permission data is stored in an attribute of sorts, although they're different from the user-settable extended attributes.

--dave

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 17:55 UTC (Wed) by nix (subscriber, #2304) [Link] (1 responses)

Of course AppArmor was doing this years ago.

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 12:28 UTC (Thu) by nix (subscriber, #2304) [Link]

... only it wasn't because I misread. It lets you constrain things, but it doesn't let you say 'only pipe access'.

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 8:22 UTC (Wed) by epa (subscriber, #39769) [Link] (2 responses)

For example, you might run a script to encode all your FLAC music into Ogg Vorbis. That script will run as you, pr1268, so traditional Unix access control gives the script access to all of the files marked as owned by pr1268.
It seems the root of the problem (no pun intended) is that creating new users is such a heavyweight operation. It's like creating branches in CVS or SVN. You have to have root access to the whole system and edit some centralized files. Cleaning up a user is even more tedious (you have to check for any files the user owns). It would be better if there were a lightweight way to create new users, so user fred could create fred_x that has a subset of fred's permissions, and launch a process as user fred_x with certain capabilities such as network access masked out. Then when the process is finished, fred_x disappears (it was only visible to fred anyway).
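Linux later grew roughly this facility in the form of unprivileged user namespaces (CLONE_NEWUSER): a process can acquire a throwaway identity that vanishes with it. A minimal Linux-only sketch, probing in a forked child whether the running kernel lets an ordinary user do this (distro policy may disable it, so failure is handled rather than assumed away):

```python
# Sketch: fork a child that tries unshare(CLONE_NEWUSER), i.e. creating
# a throwaway "subuser" namespace without root, and report whether the
# kernel allowed it. Linux-only.
import ctypes
import os

CLONE_NEWUSER = 0x10000000  # value from <linux/sched.h>

def subuser_possible():
    pid = os.fork()
    if pid == 0:
        try:
            libc = ctypes.CDLL(None, use_errno=True)
            ok = libc.unshare(CLONE_NEWUSER) == 0
            # inside the new namespace the child's uid maps to "nobody"
            # until a uid_map is written; we only probe, then exit
        except (OSError, AttributeError):
            ok = False
        os._exit(0 if ok else 1)
    _, status = os.waitpid(pid, 0)
    return os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0

print("throwaway subusers available" if subuser_possible()
      else "user namespaces unavailable here")
```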

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 22:52 UTC (Wed) by mstone (subscriber, #58824) [Link]

Do check out Plash, Rainbow, and CLONE_NEWUSER for three different takes on how this task might be approached...

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 12:37 UTC (Thu) by nix (subscriber, #2304) [Link]

Yes indeed. Long ago in the mid-1990s I had a pile of fugly sudoed shell scripts on Solaris that did exactly this: users could create and remove subusers that belonged to them, transfer files into those users and get them back afterwards. It was stymied by several things: lack of kernel support for 'subusers' (i.e. I wanted to express that user A could access all files belonging to user subA but not vice versa); and the fact that it was written in the shell, which meant I was never really confident that it wasn't actually adding security problems.

I should do it again, probably with help from PAM and/or userv this time to do the privileged gruntwork.

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 3:26 UTC (Wed) by jamesmrh (guest, #31622) [Link] (27 responses)

An example of where sandboxing is likely useful is the case of the web browser, where you might have a flawed jpeg renderer linked in, and you load a malicious image which causes arbitrary code to be executed on your system, which might then do something like install a spam bot or post your private keys to some irc channel.

This is an application of the principle of least privilege (and arguably "least authority" in this case, as the sandbox only has access to FDs passed in by the caller).

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 12:22 UTC (Wed) by MathFox (guest, #6104) [Link] (2 responses)

Running Firefox in a sandbox, protecting the user from malicious plugins and websites, sounds like a good idea. I don't see much use in sandboxing simple programs like cp and mv; verifying something as complex as Firefox is hard enough, even when you are ignoring plugins.

I wonder whether it is possible to get a storage abstraction layer like GnomeVFS security audited or properly sandboxed?

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 12:35 UTC (Wed) by rahulsundaram (subscriber, #21946) [Link] (1 responses)

http://danwalsh.livejournal.com/15700.html

Fedora installs nspluginwrapper even on 32-bit systems, forcing the plugins to run in a separate process which is then confined by a policy, configurable with a boolean. This increases stability and security.

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 14:29 UTC (Wed) by MathFox (guest, #6104) [Link]

Running the plugins in a separate (restricted) process is only part of the solution; one should handle all code and data from a webserver as untrusted. The Chrome way, splitting off the download and rendering of a webpage into a separate process, allows sandboxing the most critical part of web browsing.

I think that it is correct to "taint" OOo after it has read an untrusted document... who tells me that it doesn't contain bad macros? (It appears that Dan Walsh balanced "ease of use" and "security" differently.)

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 17:55 UTC (Wed) by PaXTeam (guest, #24616) [Link] (23 responses)

> which causes arbitrary code to be executed on your system, which might
> then do something like install a spam bot or post your private keys to
> some irc channel.

...or exploit a kernel bug, disable SELinux, escape the sandbox and all the other bad things you're saying you're protecting users from.

> This is an application of the principle of least privilege[...]

instead it's giving innocent users a false sense of security. but if you actually believe your own statements, you're free to give the whole world arbitrary code execution rights on your personal box and see how long it'll last ;).

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 18:31 UTC (Wed) by Kit (guest, #55925) [Link] (21 responses)

>> which causes arbitrary code to be executed on your system, which might
>> then do something like install a spam bot or post your private keys to
>> some irc channel.

>...or exploit a kernel bug, disable SELinux, escape the sandbox and all
> the other bad things you're saying you're protecting users from.
Which would be harder to pull off than not having to find and successfully exploit a kernel bug.

>> This is an application of the principle of least privilege[...]

>instead it's giving innocent users a false sense of security. but if
>you actually believe your own statements, you're free to give the
>whole world arbitrary code execution rights on your personal box and
>see how long it'll last ;).
Any false sense of security would be the fault of the presentation, not the implementation. Would it be foolproof? Of course not, nothing is. Would it raise the bar, making it less likely for your system to be successfully compromised? Yes, at least once the implementation is matured and when used properly.

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 20:00 UTC (Wed) by PaXTeam (guest, #24616) [Link] (20 responses)

> Which would be harder to pull off than not having to find and successfully exploit a kernel bug.

that is, 'doing something is harder than not doing it'. did you try to say something meaningful here? and out of curiosity, what do you know about finding and exploiting kernel bugs? so far you seem quite confused between 'vulnerability' and 'exploit', so it might be a good idea to clear those terms up first.

> Any false sense of security would be the fault of the presentation, not
> the implementation. Would it be foolproof? Of course not, nothing is.
> Would it raise the bar, making it less likely for your system to be
> successfully compromised? Yes, at least once the implementation is
> matured and when used properly.

i don't follow you here. how can the implementation (of what, btw? kernel? SELinux? this new sandbox?) both mature and not be foolproof at the same time? obviously exploitable kernel bugs will never go away, nor will the false sense of security, apparently. where did you say your most valuable personal box can be accessed again ;)?

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 20:54 UTC (Wed) by nix (subscriber, #2304) [Link] (2 responses)

Given that it is likely almost impossible to eliminate *all* security bugs
in Linux, even all root-granting bugs in the kernel, and is certainly
impossible to prove that they're all gone, what would you recommend? That
we give up implementing *any* other security mechanisms until, what? Until
you say the kernel is secure enough now?

Perhaps we should just junk Linux and switch to a proper capability-
based-security system, that's of course thoroughly non-POSIX but at least
can be proven secure more easily... and then realise that SMM holes and
FireWire's lovely remote-DMA features mean that we're *still* insecure...

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 22:10 UTC (Wed) by PaXTeam (guest, #24616) [Link] (1 responses)

> what would you recommend?

that you take that risk into account and live with it. instead of burying your head in the cushy cloud of false sense of security.

> That we give up implementing *any* other security mechanisms until, what?

security mechanisms obviously must be implemented (you may have heard of intrusion prevention systems? such concepts are equally applicable to kernel land as well). fake mechanisms should not be, and especially they should not be presented to the gullible public as something they are not. both SELinux and this sandbox fail fatally if the threat model includes arbitrary code execution (which is what James Morris said). and of course you can prevent arbitrary code execution with much less than the overly complex SELinux. unfortunately such environments are less than usable for the average person, so this problem is far from solved.

> but at least can be proven secure more easily...

no they cannot. for any non-trivial system (read: something you'd actually install and use on a daily basis as you do now) the complexity of the code that solves our problems cannot be really reduced (that's why microkernels, hypervisors, 'solution' du jour are not more secure either).

> and then realise that SMM holes and FireWire's lovely remote-DMA
> features mean that we're *still* insecure...

SMM 'holes' are irrelevant in the real world as exploiting/abusing them requires privileges that you want to get in the first place normally. firewire et al. imply a different threat model (hw) and hence different solutions (physical protection, nothing unsolvable there if you can pay the price).

Walsh: Introducing the SELinux Sandbox

Posted Jun 1, 2009 12:53 UTC (Mon) by hozelda (guest, #19341) [Link]

>> that you take that risk into account and live with it. instead of burying your head in the cushy cloud of false sense of security.

The risk of finding kernel bugs and exploiting them varies over time. Most security measures only address certain types of attacks anyway and so leave holes as well.

I believe selinux is a useful cog in the security machinery because it can limit the damage of exploited userland applications. It's a very useful function, for example, to keep a browser from trashing all your files, and this can happen without a kernel exploit. There are a lot more lines of app code written than kernel code. selinux can provide protection for failures in the former (exploits as well as out-of-control bugs).

To rephrase, selinux allows mitigating damage from vulnerabilities that can happen from much of the software on the system. In exchange, to preserve selinux itself, you have to protect the entirety of the kernel, but this still forms a small fraction of all the code on a machine (and it is code that has attracted a larger than average number of eyeballs and expertise).

Much of the kernel source added periodically is for drivers. Many drivers never even come into play on any given system.

>> you may have heard of intrusion prevention systems? such concepts are equally applicable to kernel land as well

This is orthogonal to selinux. It does not replace selinux functionality.

>> fake mechanisms should not be, and especially they should not be presented to the gullible public as something they are not

Legitimate complaint, but I have seen a number of people on this thread and elsewhere accept the limitations of selinux. Who is lying to the public?

>> both SELinux and this sandbox fail fatally if the threat model includes arbitrary code execution (which is what James Morris said). and of course you can prevent arbitrary code execution with much less than the overly complex SELinux

But this is again orthogonal to what selinux does.

>> the complexity of the code that solves our problems cannot be really reduced (that's why microkernels, hypervisors, 'solution' du jour are not more secure either)

From wikipedia: "Many definitions [of 'complexity'] tend to postulate or assume that complexity expresses a condition of numerous elements in a system and numerous forms of relationships among the elements."

You can reduce complexity by managing the number and quality of the parts and the inter-relationships among them.

Looked at differently, it's not hard to imagine that you can make something more complex (and buggy) without gaining any benefits; hence, not everything is at the same level of complexity or correctness [which implies some things are better than others].

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 22:29 UTC (Wed) by foom (subscriber, #14868) [Link] (16 responses)

In your world, I guess we should get rid of all access restrictions in Linux, because the kernel I'm
running may have some exploitable vulnerabilities, so any access restrictions are completely
meaningless.

In the real world, people do run multiuser linux machines.

Security is not black and white, there is such a thing as more secure and harder to break into.

This is one more link in the chain, designed to help secure single-user machines. Now, not only
do you need to be running a vulnerable JPEG rendering library to have your files stolen, you
*also* need to be running a kernel which is exploitable in the limited attack surface presented to
the JPEG decoding process.

Surely it's a good thing to attempt to limit the attack surface?

> where did you say your most valuable personal box can be accessed again ;)?

Here:
http://www.coker.com.au/selinux/play.html

(okay, it's not mine :)

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 23:03 UTC (Wed) by PaXTeam (guest, #24616) [Link] (15 responses)

> In your world, I guess[...]

in my world, you guess wrong ;). in my world, (exploitable) bugs are a fact of life and i've been working on intrusion prevention technologies for some years now.

> In the real world, people do run multiuser linux machines.

and did someone say otherwise? ;)

> Security is not black and white, there is such a thing as more secure and harder to break into.

did someone say otherwise? how did this even come up in this thread?

> you *also* need to be running a kernel which is exploitable in the
> limited attack surface presented to the JPEG decoding process.

you sound as if it was that hard to find such a kernel. here's the breaking news for you: *any* kernel in existence has exploitable bugs in it. exploitable 'in the limited attack surface presented to the JPEG decoding process'. remember do_brk? or mremap? are you gonna ban memory allocation? it's especially funny that you talk about attack surface here when the original solution (seccomp) did in fact have a meaningful reduction, unlike this alternative.

> Surely it's a good thing to attempt to limit the attack surface?

yes, except SELinux or this sandbox don't do it. the proper way is to prevent arbitrary code execution as a start or make kernel bugs unexploitable (for privilege elevation at least).

> http://www.coker.com.au/selinux/play.html

which part of 'most valuable personal box' was not clear ;)? or are you suggesting that all SELinux can protect in real life is worthless data? the fact that you're not putting your own box at risk speaks for itself.

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 0:50 UTC (Thu) by jamesmrh (guest, #31622) [Link] (14 responses)

> the proper way is to prevent arbitrary code execution as a start or make kernel bugs unexploitable (for privilege elevation at least).

If you have patches which can be upstreamed, please post them to lkml and work with the x86 and security folk.

I've not been involved in prior discussions of PaX patches specifically, so I don't know what the overall status is, but I'm sure more must be possible than having two groups sitting apart like this.

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 17:33 UTC (Thu) by PaXTeam (guest, #24616) [Link] (13 responses)

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 21:57 UTC (Thu) by nix (subscriber, #2304) [Link] (12 responses)

So... you've never tried to upstream your changes and are in fact actively
against it, you complain when people implement variations on them because
they're not done *exactly* the way you'd like (e.g. exec-shield)... and
then you complain that kernel security isn't good enough and that people
don't listen to you.

Just so that's clear.

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 22:11 UTC (Thu) by PaXTeam (guest, #24616) [Link] (5 responses)

> you've never tried to upstream your changes

correct.

> and are in fact actively against it,

not correct.

> you complain when people implement variations on them

not correct.

> because they're not done *exactly* the way you'd like (e.g. exec-shield)...

not correct.

> and then you complain that kernel security isn't good enough

correct.

> and that people don't listen to you.

not correct.

> Just so that's clear.

is it? ;)

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 22:58 UTC (Thu) by nix (subscriber, #2304) [Link] (4 responses)

Well now you're arguing against your own stated positions in comments
here, so I'll leave you to argue with yourself.

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 23:56 UTC (Thu) by PaXTeam (guest, #24616) [Link] (3 responses)

> Well now you're arguing against your own stated positions in comments
here,

then you can surely quote me back point by point? ;) unless of course you don't want to risk falling on your own sword again, as you did in the past so many times (in this very thread too ;).

Walsh: Introducing the SELinux Sandbox

Posted May 29, 2009 7:00 UTC (Fri) by nix (subscriber, #2304) [Link] (2 responses)

I simply can't be bothered. Life is much, much too short. If this rubbish
was worthy of detailed cites I'd have given them.

Walsh: Introducing the SELinux Sandbox

Posted May 29, 2009 7:54 UTC (Fri) by PaXTeam (guest, #24616) [Link] (1 responses)

In other words, you were just trolling all the time as usual.

Just so that's clear.

Walsh: Introducing the SELinux Sandbox

Posted Jun 2, 2009 12:51 UTC (Tue) by nix (subscriber, #2304) [Link]

No, I'm smashed on hayfever drugs and just spent a day in hospital after
yet another bout with anaphylactic shock. I have no intention of wasting
time searching a site that doesn't do per-author search to prove to
someone what his own words say. If you can't remember what you said a day
or less ago, I don't care.

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 22:27 UTC (Thu) by spender (guest, #23067) [Link] (5 responses)

The PaX team wasn't involved in this, but I had actually asked Linus back in 2004 if PaX would ever be accepted into the mainline kernel. His response meant that essentially no feature of PaX at the time (except for perhaps PAGEEXEC on some non-x86 architectures) would be accepted. For record-keeping purposes, here's the mail:

Date: Sun, 12 Dec 2004 10:35:53 -0800 (PST)
From: Linus Torvalds <torvalds@osdl.org>
To: spender@grsecurity.net
Subject: Re: memory leak in drivers/net/wan/cosa.c

On Sun, 12 Dec 2004 spender@grsecurity.net wrote:
>
> While I'm on that subject, would it ever be possible to PaX
> (http://pax.grsecurity.net) merged into the mainline kernel?

Hmm.. I usually don't merge stuff unless users actually ask for it, and I
haven't seen much discussion there.

That said: I absolutely _detest_ anything that does x86 segmentation. I
actually try to kill all uses of segmentation, it's a horrid thing, and
it's finally going away. NX makes the last reason for it obsolete, and I
don't want to add kernel code for something I dislike _and_ believe has no
future.

Also, randomization is similarly on my list of "absolutely evil" cases,
simply because it makes one of my pet things unusable: prelinking. I think
program startup costs are about the most importatnt things there are in
life (well, in _kernel_ life, at least), so pre-linking ends up being
something I consider very important. That means that randomization falls
under the heading of "not for regular use" as far as I'm concerned.

Of course, people who are really security-conscious may well want to do
randomization, I just dislike it as a _standard_ thing. But if it isn't
standard, then it has little point, methinks - the whole point is to make
it harder to attack standard installations, no?

> If you would be interested in merging at least parts of the code, the
> PaX team would be willing to do the work to have it included.

I don't have anything against merging individual features that would make
it easier for you guys, but see above on what I consider to be primary
objectives: no obsolete hw features that mess up generic code, and fast
process linkage startup. Which is why I tend to like a static NX kind of
setup.

But any particular detail I'm more than happy to have argued for, for
example:

> BTW, in arch/i386/kernel/entry.S, there is an information leak of the
> ebp kernel register. A fix for it has been in the PaX patch, but
> basically the solution is to add "xorl %ebp,%ebp" after "jne
> syscall_exit_work".

Yes, that seems to be a perfectly valid thing to do, although the only
thing it leaks is just the thread info address, so it's not like it leaks
random information that might contain any interesting data depending on
which system call it was. But an xorl is certainly not expensive, and it's
nicer to not leak anything at all.

Linus

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 22:38 UTC (Thu) by dlang (guest, #313) [Link] (1 responses)

am I reading the same e-mail that you are?

I see Linus arguing against specific features, but saying

quote:
I don't have anything against merging individual features that would make
it easier for you guys, but see above on what I consider to be primary
objectives: no obsolete hw features that mess up generic code, and fast
process linkage startup. Which is why I tend to like a static NX kind of
setup.

But any particular detail I'm more than happy to have argued for, for
example:

endquote

That sounds to me like he doesn't agree with everything, but is very willing to look at individual features, some of which can be accepted while others may not be.

The fact that nobody has made the effort to break up the PaX changes and present each one on its own merits does mean that they will definitely not go in as-is. If other people create patches (either completely independently, or based on the concepts of PaX) they are going to be different, but since they are trying to address the same problem it's very likely that they will end up being very close to the same thing.

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 22:59 UTC (Thu) by spender (guest, #23067) [Link]

As I had mentioned, nearly all of the features of PaX at the time were covered under those two things Linus said he wouldn't accept. The only remaining feature that he would accept would have been PAGEEXEC for non-x86 architectures -- code that nearly no one uses, changes very rarely, and wouldn't have saved the PaX team any time by merging it into mainline. Also consider that at the time, some of those architectures weren't capable of sustaining non-executable pages in userland without some kind of emulation on glibc, which means the changes to those architectures wouldn't have been accepted either. Furthermore, regarding the merging of small, individual changes, the PaX Team already discussed that here: http://lwn.net/Articles/315164/

I thought it was clear already, but hopefully that resolves any dangling questions.

-Brad

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 23:33 UTC (Thu) by nix (subscriber, #2304) [Link] (2 responses)

> That said: I absolutely _detest_ anything that does x86 segmentation.

Yet PAE does segmentation, and Linus accepted that.

He does sometimes change his mind, y'know.

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 23:44 UTC (Thu) by PaXTeam (guest, #24616) [Link] (1 responses)

> Yet PAE does segmentation,

haha, if anyone had any doubt about your expertise in these security matters, you cleared that all up in those four words ;)

Walsh: Introducing the SELinux Sandbox

Posted May 29, 2009 6:59 UTC (Fri) by nix (subscriber, #2304) [Link]

I never claimed to be any kind of expert on the innards of x86, and I'm
high on hayfever drugs.

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 23:26 UTC (Wed) by jamesmrh (guest, #31622) [Link]

Ok, perhaps I should clarify and always include the caveat that SELinux cannot be expected to protect against kernel vulnerabilities, because it is part of the kernel.

There will always be the possibility of kernel security holes, because:

- all software has bugs
- the kernel is software
- some bugs are security holes

this will *never* not be the case.

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 7:26 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

You mean like this: "changehat some_restricted_profile cat /etc/passwd" ?

It was supported in AppArmor for _ages_.

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 18:24 UTC (Wed) by talex (guest, #19139) [Link]

Which package is this command in?

I've got apparmor-utils 2.3+1289-0ubuntu14 but it doesn't seem to be there.

But the really important thing is to have a suitable sandbox policy installed by default so that applications can use it automatically, without having to get root access first to install the policy. This would probably remove the need for plash to be setuid root too.

One of the things I'd like to use it for would be sandboxing archive extraction. In Zero Install, we unpack downloaded archives and then check the contents against a digest, so it would be really useful to sandbox the extraction process to guard against malicious packages trying to exploit flaws in tar, etc.
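The digest-checking step described above could be sketched as follows. This is only a minimal illustration of hashing an unpacked tree, not Zero Install's actual manifest/digest format, and the function name `tree_digest` is invented for the example:

```python
import hashlib
import os

def tree_digest(root):
    """Hash every file name and file body in a directory tree into one digest.

    Illustration only: Zero Install's real manifest format also records
    permissions, mtimes, etc. Traversal is sorted so the result is stable.
    """
    h = hashlib.sha256()
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()                      # deterministic traversal order
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            h.update(rel.encode())           # mix in the relative file name
            with open(path, 'rb') as f:
                h.update(f.read())           # and the file contents
    return h.hexdigest()

# After extracting (ideally inside the sandbox), compare against the
# expected digest and reject the package on mismatch:
# if tree_digest('/tmp/unpacked') != expected_digest: raise ValueError(...)
```

Because the untrusted archive is only ever touched by the sandboxed extractor, a tar exploit could at worst corrupt the tree being unpacked, and the digest check would then reject it.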

Walsh: Introducing the SELinux Sandbox

Posted May 28, 2009 0:11 UTC (Thu) by jamesmrh (guest, #31622) [Link]

Changing the security context when launching an app has always also been part of SELinux (e.g. 'runcon'). This is a specific system for sandboxing an application so it has no privileges except via the FDs passed to it by the caller.
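As a concrete illustration of the first point, `runcon` (shipped with coreutils) launches a command in a given security context. The domain name `sandbox_t` below is a placeholder; the actual type names depend on the installed policy:

```shell
# Illustration only: assumes an SELinux-enabled system with a suitable
# sandbox policy loaded; "sandbox_t" is a placeholder type name.
if command -v runcon >/dev/null 2>&1 && selinuxenabled 2>/dev/null; then
    # Start the command in the restricted domain instead of the caller's
    runcon -t sandbox_t -- cat /etc/passwd
else
    echo "SELinux not active here; commands shown for illustration"
fi
```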

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 7:36 UTC (Wed) by pranith (subscriber, #53092) [Link] (2 responses)

Though I've only recently started with Nexenta, this sounds very similar to the zones feature in OpenSolaris. Is it so?

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 7:50 UTC (Wed) by rahulsundaram (subscriber, #21946) [Link]

Zones are lightweight operating-system-level virtualization.

http://en.wikipedia.org/wiki/Operating_system-level_virtu...

The SELinux Sandbox doesn't use virtualization, though there is sVirt. The Linux equivalents of zones would be things like Linux-VServer and OpenVZ.

Walsh: Introducing the SELinux Sandbox

Posted May 27, 2009 15:14 UTC (Wed) by davecb (subscriber, #1574) [Link]

Zones are created using the code that was the equivalent of SELinux (i.e., Trusted Solaris), but the purpose is to provide very-low-cost secure virtual machines.

Sandboxes are much finer-grained.

--dave


Copyright © 2009, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds