Reactive vs. pro-active kernel security
Security patches are almost always a question of tradeoffs. Sometimes the protection offered outweighs the negative effects that a security-oriented fix brings—and sometimes it doesn't. In addition, pro-active security fixes often face an uphill battle to get into the kernel, especially if they cause performance or other problems, because many kernel developers are skeptical of "solutions" for theoretical problems. In many cases, these changes come under the heading of "kernel hardening", and don't correspond to a particular known security hole; instead they address a class of potential problems, which can be much harder to sell.
A good example of this can be found in Vasiliy Kulikov's recent RFC patch to implement some checks in the functions that copy data to and from user space. Copying the wrong amount of data to or from user space can lead to security problems, like code execution or disclosing the contents of kernel memory, so checking to ensure that copies are not larger than the expected data structure is certainly beneficial. But the copy_to/from_user() functions are performance-critical. In typical fashion, Linus Torvalds doesn't mince words in his reply to Kulikov:
copy_to/from_user() is some of the most performance-critical code, and runs a *lot*, often for fairly small structures (ie 'fstat()' etc).
Adding random ad-hoc tests to it is entirely inappropriate. Doing so unconditionally is insane.
He does go on to suggest that a cleaned-up version, made configurable so that only those distributions or users who want the extra checking pay the price for it, might be acceptable. To Torvalds, the patch is more evidence of the "craziness" of the security community: "It's exactly the kind of 'crazy security people who don't care about anything BUT security' crap that I refuse to see." That, of course, is something of a recurring theme in Torvalds's and other kernel hackers' reactions to pro-active security fixes.
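For illustration, here is a minimal sketch of the kind of size check under discussion. It is a hypothetical wrapper, not Kulikov's actual patch; the helper name and the config option are invented for this example:

    #include <linux/uaccess.h>
    #include <linux/bug.h>

    /*
     * Hypothetical hardened copy helper: refuse copies that are larger
     * than the object the kernel believes it is copying from, and only
     * when the (invented) config option asks for the extra checking.
     */
    static inline unsigned long
    checked_copy_to_user(void __user *to, const void *from,
                         unsigned long n, unsigned long obj_size)
    {
    #ifdef CONFIG_HARDENED_USERCOPY_SKETCH
            if (unlikely(n > obj_size)) {
                    WARN_ONCE(1, "copy_to_user: %lu bytes requested from a %lu-byte object\n",
                              n, obj_size);
                    return n;       /* report that nothing was copied */
            }
    #endif
            return copy_to_user(to, from, n);
    }

The configurable form is roughly what Torvalds hinted might be acceptable: kernels built without the option pay nothing, while hardened builds get the check on every copy.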
Ingo Molnar had a similar concern in the discussion of another of Kulikov's patches: an effort to remove control characters from log file output. Molnar is skeptical of the patch, partly because there are no specific threats that it addresses:
That is a not so fine distinction that is often missed in security circles! :-)
When an actual flaw is found in the kernel, especially if there are exploits for it in the wild, fixes are made quickly—no surprise. But theoretical flaws, or fixes that protect badly written user-space programs, often have a tougher path into the kernel. Over the years, we have seen numerous examples of these kinds of patches, often coming from hardened kernel projects like grsecurity/PaX and Openwall. But, to some extent anyway, those projects are more concerned with security than with things like performance, and are willing to sacrifice performance to reduce or eliminate entire classes of security threats.
There is clearly a kernel (so to speak) of truth to Torvalds's complaint about "security crap", but there is also room for different kinds of kernels. It is reminiscent of the situation with SELinux in some ways. SELinux offers protections that can sometimes mitigate security problems before they come to light—exactly what pro-active security measures are meant to do—but SELinux is disabled by numerous administrators and by most distributions other than Red Hat's. For some, the extra protection that SELinux provides is not worth the overhead and problems that it can cause. Others may be more concerned about zero-day exploits and enable SELinux or run hardened kernels.
Another example of a fix that didn't make it into the kernel, though it would have eliminated a common security problem, is Kees Cook's attempt to disallow symbolic links in "sticky" directories—to stop /tmp symlink bugs (like this one from July 12). That particular fix was controversial, as some kernel hackers didn't think it appropriate to change core VFS code to fix buggy user-space programs. But moving the fix into a Linux Security Module (LSM)—along with a handful of other unrelated security fixes—didn't pass muster either.
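The pattern being targeted is easy to demonstrate. Here is a small user-space sketch of the classic /tmp symlink race that Cook's restriction is aimed at; the file name is made up, and the "safer" variant simply shows the O_EXCL discipline that careful programs already use:

    #include <fcntl.h>
    #include <unistd.h>

    /*
     * Vulnerable: a predictable name in a world-writable directory, opened
     * in a way that follows whatever is already there.  An attacker who
     * pre-creates /tmp/myapp.tmp as a symlink to a file the victim can
     * write gets that file clobbered with the victim's privileges.
     */
    static void vulnerable(void)
    {
            int fd = open("/tmp/myapp.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0600);
            if (fd >= 0) {
                    write(fd, "data\n", 5);
                    close(fd);
            }
    }

    /*
     * Safer: O_CREAT | O_EXCL fails if the path already exists, even as a
     * dangling symlink.  A kernel-side restriction on following symlinks
     * in sticky, world-writable directories protects the programs that
     * forget to do this.
     */
    static void safer(void)
    {
            int fd = open("/tmp/myapp.tmp", O_WRONLY | O_CREAT | O_EXCL, 0600);
            if (fd >= 0) {
                    write(fd, "data\n", 5);
                    close(fd);
            }
    }

    int main(void)
    {
            vulnerable();
            safer();
            return 0;
    }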
There have also been various efforts to remove sources of information in /proc and elsewhere that can make it easier for exploits to function. Things like hiding kernel addresses from unprivileged processes, restricting page access to read-only and no-execute, protecting /proc/slabinfo, and lots of others have been proposed—and sometimes adopted—over the last year or two. These kinds of fixes are often greeted with a level of skepticism (which is not so different from other kinds of patches, really), and sometimes find their path into the mainline to be fairly difficult, sometimes impossible. That's not to say that any of those that were rejected should be in the kernel, but in most cases they do add some level of protection that security-conscious users might be happy to have.
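As a concrete example of the address-hiding work, the kernel's %pK printk format, together with the kptr_restrict sysctl, prints zeroes instead of a real pointer to unprivileged readers. The show function and object below are invented for illustration; %pK and kptr_restrict themselves are real:

    #include <linux/seq_file.h>
    #include <linux/kernel.h>

    /* Invented example object whose address would otherwise leak. */
    static struct { int dummy; } example_object;

    /*
     * With kptr_restrict set, %pK shows all zeroes to readers who lack the
     * needed privilege, so this /proc file no longer hands an exploit a
     * usable kernel address; privileged readers still see the real pointer.
     */
    static int example_show(struct seq_file *m, void *v)
    {
            seq_printf(m, "object at %pK\n", &example_object);
            return 0;
    }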
The risk of keeping many of these pro-active hardening features out of the mainline is probably small, but it certainly isn't non-existent. There is a balance to be found; performance, maintainability, and low patch intrusiveness are often more important to Torvalds and the kernel community than fixes that could, but might not, catch security exploits that aren't yet known. Essentially, making most users pay a performance penalty over and over again, potentially untold trillions of times, is too high a price. Fixing the problems that are found, when they are found, is the course that the mainline has (largely) chosen.
It is probably somewhat disheartening for Kulikov, Cook, and others to continually have their patches rejected for the mainline, but they do tend to be used elsewhere. Many of Cook's patches have been picked up in Ubuntu, where he is a member of the security team, and Kulikov is a student in the Google Summer of Code for Openwall specifically tasked with hardening both the Openwall kernel and upstream (to the extent he can anyway). Their efforts are certainly not being wasted, and security-conscious administrators may want to choose their distribution or kernel carefully to find the one that best matches their needs.
Index entries for this article
Security: Linux kernel
Security: Tradeoffs
Posted Jul 14, 2011 3:10 UTC (Thu)
by Baylink (guest, #755)
[Link] (3 responses)
The argument's been made here -- also aimed at me -- that it doesn't matter whether the developers who are getting scared away are female or *not* -- the problem is the guy throwing the punches, and it doesn't matter who that is.
I think that this particular exchange gives the lie to that assertion -- even if only because both parties would assert it's "not personal; only business"... because they would be *right*.
Posted Jul 14, 2011 4:32 UTC (Thu)
by jrn (subscriber, #64214)
[Link] (2 responses)
Posted Jul 21, 2011 3:25 UTC (Thu)
by wtanksleyjr (subscriber, #74601)
[Link]
That would go on my resume. It wouldn't match any HR keywords, but many engineers would notice and remember THAT candidate.
Posted Jul 21, 2011 21:11 UTC (Thu)
by solardiz (guest, #35993)
[Link]
http://www.openwall.com/lists/kernel-hardening/2011/07/12/2
Here's Vasiliy's "GSoC midterm accomplishments" summary:
http://www.openwall.com/lists/kernel-hardening/2011/07/19/3
There was no expectation that all patches would be accepted. This project is about revising and submitting the various security hardening changes properly, which is something that hasn't been done for many of them yet because it's a mostly thankless job. Vasiliy was well aware of what he was getting into. :-) Before starting this project, he found and patched many vulnerabilities in the Linux kernel (mostly infoleaks); those patches were applied upstream, as well as in distro kernels (you can see his name in plenty of distro vendor advisories about kernel updates). He also got the ICMP sockets patch applied in Linux 3.0:
http://lists.openwall.net/linux-kernel/2011/05/13/432
At Openwall, we're very happy to work with Vasiliy on this project (as well as on some other projects - e.g., Vasiliy did some work towards the Owl 3.0 release).
Others interested in joining the project or just watching are welcome to subscribe to the kernel-hardening mailing list:
http://www.openwall.com/lists/#subscribe
Vasiliy is CC'ing kernel-hardening on his LKML postings relevant to this project, and we also use the kernel-hardening list for additional discussions (such as on what patches to bring to LKML next).
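For readers unfamiliar with the ICMP sockets feature mentioned above: it lets an unprivileged process send echo requests without a raw socket or a setuid ping binary. A minimal sketch, assuming the administrator has set net.ipv4.ping_group_range to include the caller's group:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <netinet/ip_icmp.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
            /* Unprivileged ICMP echo socket (Linux 3.0+); socket() fails
             * unless net.ipv4.ping_group_range permits this group. */
            int fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_ICMP);
            if (fd < 0) {
                    perror("socket");
                    return 1;
            }

            struct icmphdr req = { .type = ICMP_ECHO, .un.echo.sequence = 1 };
            struct sockaddr_in dst = { .sin_family = AF_INET };
            dst.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

            /* The kernel supplies the echo identifier and the checksum. */
            if (sendto(fd, &req, sizeof(req), 0,
                       (struct sockaddr *)&dst, sizeof(dst)) < 0)
                    perror("sendto");

            close(fd);
            return 0;
    }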
Posted Jul 14, 2011 7:56 UTC (Thu)
by dsommers (subscriber, #55274)
[Link] (6 responses)
Then, as these patches are proved to be stable, maybe providing some real performance impact numbers, maybe even generalised or made to solve real issues in the kernel, they might gradually be accepted upstream in the end. Just like the realtime patches, where even Linus knows what's happening - but as they solve issues he can see and understand, he lets them in, one by one.
Posted Jul 14, 2011 18:34 UTC (Thu)
by dlang (guest, #313)
[Link] (5 responses)
there is seldom very much disagreement on the performance impact, so 'real performance impact numbers' don't help much. the disagreement is in the value of the protection being provided.
As the article says, if someone can show a way to use one of these things in an exploit, the problem gets fixed fast (and a generic solution to the type of bug _is_ preferred when a fix like this is done)
but if there's no known way to exploit the vulnerability, or the exploit can only be done by the system administrator, there is a lot of resistance to impacting performance 'just in case someone figures out how to exploit it later'
Posted Jul 14, 2011 20:18 UTC (Thu)
by patrick_g (subscriber, #44470)
[Link] (4 responses)
> there is seldom very much disagreement on the performance impact, so 'real performance impact numbers' don't help much.
I haven't seen any performance numbers comparing the mainline kernel and the GRSecurity/PaX kernel. Do you have a link? I think it would be very interesting to see a benchmark like this.
Posted Jul 14, 2011 20:33 UTC (Thu)
by dlang (guest, #313)
[Link] (2 responses)
however, when particular patches are involved (such as the copy from/to user modifications referred to in the article), there's no disagreement that the extra checks will impact performance.
far too many security people take the stance that security checks should always be implemented, in as many places as possible, and that performance just doesn't matter in comparison (I'm sure I come across like this sometimes to my development folks ;-). When working on a particular installation or use case, this may be very valid, but when doing general purpose software where you don't know what it will be used for, you can't say "this change is below user perception so we'll make this change" or "we accept that we will need 101,000 machines instead of 100,000 machines to run this application, so we'll make this change"
Posted Jul 17, 2011 21:56 UTC (Sun)
by gmaxwell (guest, #30048)
[Link] (1 responses)
The applications where security is far more important than performance are no less real because other things exists. Consider the firewalls and jump-hosts that mediate administrative access to those 100,000 machines.
General purpose software doesn't mean "ignores requirements that are less important to me, but more important to others". I'd say that general purpose software seeks to find a blended solution that works acceptably for all cases, and offers options where the needs differ.
There is obviously a maintainability concern with options, but copy from/to checks can be made fairly self-contained, far more so than, e.g., the peppering of the codebase that SELinux requires. I'd think that this kind of generic boundary hardening is exactly the kind of optional feature a general purpose system should have.
Posted Jul 18, 2011 8:03 UTC (Mon)
by dlang (guest, #313)
[Link]
there really are not that many situations where you do that.
Posted Jul 15, 2011 16:01 UTC (Fri)
by PaXTeam (guest, #24616)
[Link]
Posted Jul 15, 2011 2:40 UTC (Fri)
by naptastic (guest, #60139)
[Link] (3 responses)
What if these kinds of fixes were config options, default off, with big scary warnings about performance penalties when configured on, and then they **warn** about unsafe behavior by userspace programs, so that, in the perfect world where I live, administrators will test programs thoroughly, see that they're unsafe, and file bug reports with the programs' maintainers, who will happily and quickly fix their buggy userland code and push updates to the distributions?
Posted Jul 15, 2011 3:39 UTC (Fri)
by dlang (guest, #313)
[Link] (1 responses)
the cost of these things isn't just performance, it's also maintainability (especially if you end up with multiple paths due to this being a configurable option)
Posted Jul 15, 2011 5:53 UTC (Fri)
by naptastic (guest, #60139)
[Link]
I understand and agree with the rejection of these kinds of patches: it's not the kernel's job to fix user-space bugs. But as an option, under debugging or something, could it at least warn, "Hey, app developer, you've left a potential security hole"?
Maybe there's a better way to do this?
Posted Jul 16, 2011 6:29 UTC (Sat)
by djm (subscriber, #11651)
[Link]
Does Linux support multi-user?
Posted Jul 16, 2011 9:01 UTC (Sat)
by geuder (subscriber, #62854)
[Link] (15 responses)
Well, you might say: of course it does, it always has.
If you see computing as a technical activity like it was 1970 - 1990, I agree.
But if you consider that you have your private life in the machine, money and whatever, I'd say no. A user should not see who else has processes running and what they are. A user should not see who has a home directory.
Not a relevant issue, you say; we all run de-facto single-user PCs. Yes, we do, but I think only LWN readers and the like should. Others had better use something like LTSP or ChromiumOS, some kind of lightweight client solution. Currently Linux cannot be used straightforwardly as the server for such a computing model. I'd assume the patches required to do so would also be deemed "entirely insane".
If you explicitly said we support only single-user machines, I'd agree that many of the "security issues" are non-issues. Loss of privacy = sniffing your own data. Denial of service = turning your machine off. But I don't think anybody has said that Linux should never support building servers for multi-user systems.
Or are we just hitting the limits of one size fits all? What fits a cell phone or a wireless router, can it also fit a multi-user server? And because the limits have not been agreed on, people fight over what is reasonable and what is unconditionally insane.
Posted Jul 16, 2011 18:09 UTC (Sat)
by kleptog (subscriber, #1183)
[Link] (14 responses)
> A user should not see who else has processes running and what they are. A user should not see who has a home directory.
I don't see how this follows. I've worked on shared systems and being able to see other people's processes and see who has a home directory just isn't an issue. Achieving that would require far more invasive changes for, as far as I can see, zero benefit.
If you want that kind of isolation, use VMs. But for shared systems you mostly just need to prevent people from interfering with each other's work. Being able to see what processes are being run isn't often an issue.
Posted Jul 16, 2011 20:47 UTC (Sat)
by geuder (subscriber, #62854)
[Link] (13 responses)
> I don't see how this follows.
Elementary privacy.
> able to see other people's processes and see who has a home directory just isn't an issue.
Depends on what kind of service you build. For many kinds of services privacy is a must. If you offer some kind of hosted computing or thin client server, it's just not acceptable that different customers see each other. Even within 1 company you might be legally obliged to maintain 100% isolation.
> Achieving that would require far more invasive changes for,
Probably, I have not thought very well about all the open "windows" we have today.
> as far as I can see, zero benefit.
The benefit would be that you can build a multi-user system with complete isolation on a single kernel.
> If you want that kind of isolation, use VMs.
Of course that's what I have to do today because Linux is not multi-user (if strict privacy is required). But the overhead of running VMs is orders of magnitude higher than having the isolation inside a single kernel.
Please note that I did not say we need multi-user support.
I'm just saying:
- Multi-user support (where user is a human, not some daemon account in the system) in 2011 requires privacy
- Linux is not multi-user in that sense; it is multi-user in the sense of the 1970s or 80s
- Because the difference is not made obvious, some people write patches, which others don't accept.
- Those who don't accept the patches don't (dare to?) say clearly that their goal is to support single-user systems only.
Posted Jul 16, 2011 21:06 UTC (Sat)
by dlang (guest, #313)
[Link] (1 responses)
you are redefining the term
Posted Jul 16, 2011 23:43 UTC (Sat)
by geuder (subscriber, #62854)
[Link]
Could be. Requirements have changed a lot in the last 10-15 years. Privacy was not such an issue before, when computing was mainly about engineering. How many new operating systems have appeared in the last 10-15 years?
Hmm, thinking twice... As a matter of fact I'm not sure whether the NT kernel supports it. At least it has more fine-grained privileges than Linux. No idea whether http://msdn.microsoft.com/en-us/library/bb530716%28v=vs.8... is a complete list; I thought I had seen even more when using ProcessExplorer[1] some years back. And what about SELinux or grsecurity? I haven't looked at them in detail, but at least http://grsecurity.net/ lists:
> A restriction that allows a user to only view his/her processes
Maybe your claim wasn't correct after all?
[1] http://technet.microsoft.com/en-us/sysinternals/bb896653 Great tool BTW for everybody curious about operating systems. Haven't seen an equivalent one in Linux.
Posted Jul 17, 2011 19:37 UTC (Sun)
by raven667 (subscriber, #5198)
[Link] (10 responses)
There are technologies in Linux like containers and all the namespaces support that's been worked into the kernel over the years to support the kind of isolation you are talking about. It's my understanding that container based virtualization is all about running multiple isolated instances of the userspace environment on a single kernel system image.
One thing that is probably missing is some easy configuration interface to do what you want for isolating users, all the infrastructure is probably there but the tools to use it the way you want may not be.
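A minimal illustration of the namespace mechanism being described: the CLONE_NEWPID flag (a real kernel interface, and one of the building blocks that container tools such as lxc use, along with CLONE_NEWNS, CLONE_NEWNET and friends) gives the child its own PID space, so it cannot see other users' processes. This is only a sketch and needs root:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Runs inside the new PID namespace: it sees itself as PID 1 and, once
     * a fresh /proc is mounted there, sees no processes but its own. */
    static int child(void *arg)
    {
            printf("inside the namespace: pid=%d\n", (int)getpid());
            return 0;
    }

    static char child_stack[1024 * 1024];

    int main(void)
    {
            pid_t pid = clone(child, child_stack + sizeof(child_stack),
                              CLONE_NEWPID | SIGCHLD, NULL);
            if (pid < 0) {
                    perror("clone");
                    return 1;
            }
            waitpid(pid, NULL, 0);
            return 0;
    }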
Posted Jul 18, 2011 0:30 UTC (Mon)
by mathstuf (subscriber, #69389)
[Link] (2 responses)
Posted Jul 18, 2011 9:45 UTC (Mon)
by Klavs (guest, #10563)
[Link] (1 responses)
Posted Jul 18, 2011 16:45 UTC (Mon)
by geuder (subscriber, #62854)
[Link]
True, I forgot completely about that one. We have actually used it here in one project, but to isolate only one "untrusted guest" from the host system. Haven't thought about running tens of containers, but I could imagine that the overhead is pretty low especially compared to VMs.
But lxc would not help to get more consensus about these security "issues" this discussion started from. If the kernel were affected by some information disclosure or denial of service issues, in many cases the issue would not be limited to processes running inside the same container.
So the nice argument that within one container we can just talk about a single-user system and not worry that much about information disclosure/denial of service/pro-active security would just not apply to many cases. No free lunch this time either :(
Posted Jul 18, 2011 16:25 UTC (Mon)
by geuder (subscriber, #62854)
[Link] (6 responses)
You mean in terms of CPU overhead, when a small number of VMs is running? I can agree. That's what I do here on my desktop all the time, because I want to run both stable versions and bleeding edge versions of different distros on the same machine.
But suppose I want to build a multi-user system. I could have e.g. 1000 accounts with some 50 of them logged in concurrently. Not an issue with a single kernel, resource-wise (depending on the HW of course). But running 50 VMs just to get stronger privacy??? Or even 1000??? (With the 50 VM variant I'd still need some kind of "session router" to make sure everybody logging in gets a VM of their own. Doesn't sound very standard, if there isn't some miracle package for this purpose out there that I might have missed.) I don't see your 1-5% here; I would say there is no way you do that with VMs for any reasonable price or HW.
Or maybe you can? I have seen at least one 64GB server. With 1GB for every VM, you could already support more than 50 VMs without even sharing common pages or swapping. But I think the overhead in memory consumption is tens of percent, not 1-5%. And I guess the price curve for server memory in that size is not linear. (Haven't bought anything over 4GB myself, so not sure.)
Posted Jul 18, 2011 17:51 UTC (Mon)
by raven667 (subscriber, #5198)
[Link] (4 responses)
So your example of 50 VMs, where you say there is no way you could do this, isn't true: running desktop VMs at that level of density isn't even cutting edge and can be done on a modest dual-socket system, probably worth around $10-15k, whereas 78 $500 desktops would be almost $40k.
In fact, from a security perspective, running desktops as virtual machines has some other benefits too, in that many systems are run from snapshots off a central, read-only system image, so infected machines can be easily and completely rolled back to a known good state.
Posted Jul 19, 2011 3:51 UTC (Tue)
by dlang (guest, #313)
[Link] (1 responses)
remember that the users still need to have a machine with a display and keyboard.
the advantage of virtual desktops isn't hardware savings, it's centralized management/backup/etc
Posted Jul 19, 2011 16:19 UTC (Tue)
by raven667 (subscriber, #5198)
[Link]
Yes, there needs to be something at the desk to display output but you have more flexibility on quality and speed, buying cheaper machines, keeping existing old machines or even allowing users to bring their privately-owned systems to use for display only.
Yes, the centralized management is a huge win for virtualization. I also wanted to point out that it isn't cost-prohibitive either.
Posted Jul 19, 2011 14:21 UTC (Tue)
by geuder (subscriber, #62854)
[Link] (1 responses)
True. Well, I got access to the 64GB machine for free already 2 years ago, because it was kind of surplus for the owner organization. So I could have figured out that it is no longer a high-end machine.
Just checked the first Dell offer I could find and 96GB were 4000 EUR. Indeed cheaper than I thought, but still some 50 EUR per user just for RAM in such a VM installation.
But if I get your point right, you are saying it's getting so cheap that we can stop worrying about a single Linux kernel being suitable for multiple users with privacy/security requirements. Just use VMs in that case.
Posted Jul 19, 2011 16:38 UTC (Tue)
by raven667 (subscriber, #5198)
[Link]
Now I will point out that running multiple OS kernels in a VM environment isn't the goal; hardware memory management supports strong separation, but it's just currently easier to separate jobs into different OS kernels than to build and configure the same level of separation within one OS kernel. Sooner or later we will get per-process checkpointing and live migration, as well as containers and namespaces, such that you will have a single system image across a cluster of machines, with better scheduling and visibility of resources.
Posted Jul 18, 2011 23:29 UTC (Mon)
by njs (subscriber, #40338)
[Link]
I think you missed the part of his comment where he clarifies that he's talking about container-style virtualization (vserver/lxc), not emulated-hardware-style virtualization (kvm/xen).
You can indeed have 50 "VMs" that are all running under the same kernel, with almost no overhead versus running all the same processes in a single "VM". (The information about which VM each process belongs to is just some extra bits in the task struct in the kernel.)