Posted Nov 18, 2004 3:15 UTC (Thu) by walters
Parent article: Civilizing SELinux
This is a long article, and a detailed reply is a bit annoying to do in a comment. I'll give it a shot anyways :)
So a couple of questions come immediately to mind: how is it possible for anybody to truly understand a system's security policy, and how can that policy be shown to be correct? Complexity and obscurity are enemies of security, and SELinux has large amounts of both.
In general, I don't think anyone out there understands everything that goes on in a Linux system, particularly as projects like Fedora drive toward further integration. A very good example is the recent work we did on allowing a user to set the printer driver for CUPS. This involved a hotplug script from HAL talking to the user session, which was listening on a D-BUS service. When the user picked a driver, the cups-config-daemon process would run printconf-backend and send SIGHUP to CUPS.
I extended the CUPS SELinux policy for this, and it was not particularly easy; I kept being surprised by what was being executed and by what files were being changed (e.g. /etc/alchemist/namespace/printconf/local.adl). But rewriting the code was not an option, so SELinux had to be able to adapt to the complexity of the system.
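To give a flavor of what "extending the policy" means here, a couple of rules in the low-level policy language might look like the following. This is only an illustrative sketch; the type name printconf_etc_t is hypothetical, and the real rules in the CUPS policy will differ:

```
# Let the config daemon's domain read and rewrite the
# printconf state files (printconf_etc_t is a made-up
# type name for illustration):
allow cupsd_config_t printconf_etc_t:file { read write getattr };

# Let it deliver the SIGHUP to the running cupsd:
allow cupsd_config_t cupsd_t:process signal;
```

Each surprise during testing ("what was being executed, what files were being changed") typically shows up as an avc denial and turns into another rule like these.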
The second point I want to make here is that yes, the SELinux example policy is complex, but there exist tools to analyze it. For example, apol from Tresys.
Installing a new program on a full-blown SELinux system required updating the security policy.
Not necessarily. For example, in the "targeted" policy, only a few select daemons are confined. Any new software you install is unrestricted by SELinux by default. Even in the strict policy, you have the option to mark executables as unconfined_exec_t; this means that when executed, the process will transition to unconfined_t and not be restricted by SELinux. Obviously though, it's better to write policy.
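The unconfined_exec_t escape hatch mentioned above boils down to a file context entry plus a domain transition. A rough sketch, with an example path and the macro style used in the sample policy (details will vary between policy versions):

```
# file_contexts entry labeling the binary (path is an example):
/usr/local/bin/myapp  --  system_u:object_r:unconfined_exec_t

# In the policy, executing a file of type unconfined_exec_t
# transitions the process into the unrestricted domain:
domain_auto_trans(userdomain, unconfined_exec_t, unconfined_t)
```

Once in unconfined_t, the process is subject only to the ordinary Unix permission checks, which is why writing real policy is still preferable.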
There has been talk of a day when applications are routinely shipped with SELinux policy files, just like they currently contain makefiles.
I can imagine sample policies being shipped, but there should be no expectation that a security policy can work anywhere:
Perhaps the biggest problem, though, is the assumption that a single policy file will fit into the security policies running on systems worldwide.
There is absolutely no such assumption. The NSA has made it very clear that the policy distributed on their website is only a sample policy. Distributions use it as a common reference point, but nothing in SELinux requires you to use the sample policy, or your distribution's policy. All that said, I think you would need some very strong requirements to deviate very far from the sample policy.
It turns out that the proper labels are stored in the SELinux policy; what's on the files themselves can be thought of as a sort of cached version.
Sort of. We've discussed this on the SELinux mailing list, and I think the general agreement was that the filesystem labels (xattrs) are not just a cache for the policy file_contexts regexps, but can be legitimately customized.
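Concretely, the distinction is between the regexp defaults in file_contexts and the xattr actually stored on a file. A sketch (the user path is an example only):

```
# file_contexts: the default ("cached") label for a subtree:
/var/www(/.*)?    system_u:object_r:httpd_sys_content_t

# A legitimate per-file customization of the xattr itself,
# deviating from the regexp default:
#   chcon -t httpd_user_content_t /var/www/html/alice/index.html
```

A relabel with setfiles/restorecon would reset the file to the file_contexts default, which is exactly why the "just a cache" view breaks down: the customized xattr carries information that the policy regexps do not.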
Perhaps part of the real problem with SELinux is that policies must be written in the equivalent of assembly language.
This has come up before. It is certainly possible to imagine a more coarse-grained syntax for policy writing. The difficulty I see in creating such a language is twofold. First, it could create a false sense of security: for example, if this higher-level language only restricts accesses to files, but allows signals to any other domain, that could be exploited fairly easily. Second, it has to integrate nicely with the existing language; if you take the approach of, e.g., autogenerating type names for files, the result would look very ugly to those of us writing in the "low level" language.
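To make the "false sense of security" point concrete, here is a toy sketch (entirely hypothetical, not a real tool) of what such a coarse-grained compiler might do: it takes a high-level description of which file types a domain may touch and emits low-level allow rules. Note what it silently does not cover, such as signals or IPC:

```python
# Hypothetical sketch of a coarse-grained policy language
# compiling down to low-level SELinux allow rules. All names
# and the rule coverage are illustrative only.

def compile_policy(domain, readable=(), writable=()):
    """Emit allow rules for the given file-access spec.

    Deliberately incomplete: nothing here restricts signals,
    IPC, or network access -- the domain would still be free
    to, say, signal any other domain, which is exactly the
    false-sense-of-security problem.
    """
    rules = []
    for t in readable:
        rules.append(f"allow {domain} {t}:file {{ read getattr }};")
    for t in writable:
        rules.append(f"allow {domain} {t}:file {{ read write append getattr }};")
    return "\n".join(rules)

print(compile_policy("myapp_t",
                     readable=["etc_t"],
                     writable=["myapp_log_t"]))
```

The autogenerated type names such a tool would need for per-file rules are also what would clutter the namespace for people still writing in the low-level language directly.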