>> that you take that risk into account and live with it. instead of burying your head in the cushy cloud of false sense of security.
The risk of kernel bugs being found and exploited varies over time. Most security measures address only certain types of attacks anyway, and so leave holes as well.
I believe selinux is a useful cog in the security machinery because it can limit the damage from exploited userland applications. Keeping a browser from trashing all your files, for example, is a very useful function, and that kind of damage can happen without any kernel exploit. There are far more lines of application code written than kernel code, and selinux can provide protection against failures in the former (exploits as well as out-of-control bugs).
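To make the browser case concrete, here is a minimal sketch in SELinux Type Enforcement policy syntax. The type names (browser_t, browser_home_t) are hypothetical illustrations, not the names a real distribution policy uses (Fedora's reference policy uses names like mozilla_t); the point is only the shape of the confinement:

```
# Hypothetical TE policy fragment (illustrative type names).
type browser_t;
type browser_home_t;

# The browser domain may manage only files of its own type...
allow browser_t browser_home_t:file { read write create unlink };

# ...and there is deliberately NO rule granting browser_t access to
# generic home-directory content (user_home_t), so an exploited
# browser cannot trash ~/ even though the kernel was never touched.
```

In enforcing mode, any attempt by the confined process to step outside these rules is denied by the kernel and logged as an AVC denial, regardless of what the (compromised) userland code tries to do.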
To rephrase, selinux allows mitigating damage from vulnerabilities that can happen from much of the software on the system. In exchange, to preserve selinux itself, you have to protect the entirety of the kernel, but this still forms a small fraction of all the code on a machine (and it is code that has attracted a larger than average number of eyeballs and expertise).
Much of the kernel source added periodically is for drivers. Many drivers never even come into play on any given system.
>> you may have heard of intrusion prevention systems? such concepts are equally applicable to kernel land as well
This is orthogonal to selinux. It does not replace selinux functionality.
>> fake mechanisms should not be, and especially they should not be presented to the gullible public as something they are not
Legitimate complaint, but I have seen a number of people on this thread and elsewhere accept the limitations of selinux. Who is lying to the public?
>> both SELinux and this sandbox fail fatally if the threat model includes arbitrary code execution (which is what James Morris said). and of course you can prevent arbitrary code execution with much less than the overly complex SELinux
But this is again orthogonal to what selinux does.
>> the complexity of the code that solves our problems cannot be really reduced (that's why microkernels, hypervisors, 'solution' du jour are not more secure either)
From wikipedia: "Many definitions [of 'complexity'] tend to postulate or assume that complexity expresses a condition of numerous elements in a system and numerous forms of relationships among the elements."
You can reduce complexity by managing the number and quality of the parts and the inter-relationships among them.
Looked at differently, it's not hard to imagine making something more complex (and buggier) without gaining any benefit; hence not everything sits at the same level of complexity or correctness, which implies some designs really are better than others.