Kernel runtime security instrumentation

Posted Sep 4, 2019 23:23 UTC (Wed) by Cyberax (✭ supporter ✭, #52523)
In reply to: Kernel runtime security instrumentation by kpsingh
Parent article: Kernel runtime security instrumentation

I consider SELinux to be an anti-feature and auditing a giant slowdown and a waste of time.

> mitigation based on the audited data
The only valid mitigation for a detected intrusion is to bring down or isolate the host.



Kernel runtime security instrumentation

Posted Sep 6, 2019 13:40 UTC (Fri) by cpitrat (subscriber, #116459) [Link] (19 responses)

You can easily imagine cases where isolating the host could have worse consequences than anything the attacker could do. Having ways to react automatically and limit the attacker's possibilities is still useful.

This could also be useful in honeypots.

Kernel runtime security instrumentation

Posted Sep 6, 2019 16:24 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (18 responses)

> You can easily imagine cases where isolating the host could have worse consequences than anything the attacker could do.
For example?

> Having ways to react automatically and limit the attacker's possibilities is still useful.
Then why not do it from the start?

Kernel runtime security instrumentation

Posted Sep 6, 2019 16:53 UTC (Fri) by cpitrat (subscriber, #116459) [Link] (17 responses)

For example, if the host is supporting a critical service, then switching to a highly protected mode (think read-only, potentially degraded mode) allows the service to keep running while you investigate, rather than suffering a DoS caused by a script kiddie pulling a prank.

This is just one scenario. This seems like a flexible solution that allows for some interesting tools.

Kernel runtime security instrumentation

Posted Sep 6, 2019 16:54 UTC (Fri) by cpitrat (subscriber, #116459) [Link] (1 responses)

For a more concrete example, the degraded mode could be a self driving car pulling over or giving back control to the driver.

Kernel runtime security instrumentation

Posted Sep 6, 2019 20:33 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

This is actually an example where isolation is the best policy.

Kernel runtime security instrumentation

Posted Sep 6, 2019 16:58 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (13 responses)

There are so many things wrong with this picture:
1) A single critical server.
2) Accessible through the Internet.
3) To a script kiddie.

> This is just one scenario. This seems like a flexible solution that allows for some interesting tools.
This seems like an overengineered solution for a non-problem.

Kernel runtime security instrumentation

Posted Sep 6, 2019 18:19 UTC (Fri) by cpitrat (subscriber, #116459) [Link] (12 responses)

> 1) A single critical server.
I didn't say there was a single one. There can be multiple, and they could all get compromised at (more or less) the same time by the same person.
> 2) Accessible through the Internet.
If the service is available through the Internet, that's unavoidable. The server could have been exploited through the service it provides.
> 3) To a script kiddie.
See 2)

Kernel runtime security instrumentation

Posted Sep 6, 2019 20:35 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (11 responses)

So isolate all of them. The "active countermeasures" nonsense is just crap. There's not much you can do once an attacker is in if you have to let them keep running.

Kernel runtime security instrumentation

Posted Sep 7, 2019 16:47 UTC (Sat) by kpsingh (subscriber, #112411) [Link] (10 responses)

You say that once you find out you are under attack, the host should be completely isolated (while I do not agree with this). Even if one were to agree with you, detection simply cannot happen without effective monitoring of security signals.

Whether you choose to block the specific malicious activity or the host itself is a decision you can make.

Kernel runtime security instrumentation

Posted Sep 7, 2019 18:44 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link] (7 responses)

> You say that once you find out you are under attack, the host should be completely isolated (while I do not agree with this). Even if one were to agree with you, detection simply cannot happen without effective monitoring of security signals.
And so why does this need yet more eBPF crap?

Kernel runtime security instrumentation

Posted Sep 7, 2019 18:58 UTC (Sat) by kpsingh (subscriber, #112411) [Link] (6 responses)

Because you can create signals dynamically with KRSI's eBPF hooks.

PS: I don't intend to reply further if your communication stays unprofessional.

Kernel runtime security instrumentation

Posted Sep 7, 2019 19:07 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link] (5 responses)

What signals? Seriously, where is an example of a use, with an example of the damage mitigated, that can't be done with the existing audit infrastructure?

All I see is handwaving like:

> There could be dynamic whitelists or blacklists of various sorts, for kernel modules that can be loaded, for instance, to prevent known vulnerable binaries from executing, or stopping binaries from loading a core library that is vulnerable to ensure that updates are done.
If you have a "vulnerable binary" then why the hell is it not deleted?

For me personally the last thing I want is more of SELinux-style security theater that _will_ inevitably break in various exciting ways.

Kernel runtime security instrumentation

Posted Sep 7, 2019 19:21 UTC (Sat) by kpsingh (subscriber, #112411) [Link] (4 responses)

You yourself mentioned auditing is a giant slowdown. So, you are now contradicting yourself!

Patching / deleting a binary on a really huge number of servers cannot be done in seconds.

Can you audit environment variables with audit? No, you cannot!

What do you need to do to add support? Change a lot of stuff: the policy language, auditd, parsers, etc.

The development cycle for adding a new signal, and then a new policy based on that signal (e.g. a permission error if the same environment variable is set twice), touches many components; this is an attempt to fix that.
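
As a rough illustration of what that shorter cycle can look like, here is a minimal sketch of a single eBPF program attached to an LSM exec hook that refuses to run binaries on a blocklist. The section name, map layout, and keying by inode number are assumptions made for the example; this is not the actual KRSI patch set:

    /* Illustrative sketch only; not the KRSI patches themselves. */
    #include "vmlinux.h"
    #include <errno.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    char LICENSE[] SEC("license") = "GPL";

    /* Inode numbers of binaries that should not be executed (assumed layout). */
    struct {
            __uint(type, BPF_MAP_TYPE_HASH);
            __uint(max_entries, 1024);
            __type(key, u64);
            __type(value, u8);
    } blocked_inodes SEC(".maps");

    SEC("lsm/bprm_check_security")
    int BPF_PROG(deny_blocked_exec, struct linux_binprm *bprm)
    {
            u64 ino = bprm->file->f_inode->i_ino;

            /* Veto the exec if the binary's inode is on the blocklist. */
            if (bpf_map_lookup_elem(&blocked_inodes, &ino))
                    return -EPERM;
            return 0;
    }

The point is that both the hook and the policy live in one small, dynamically loadable program, instead of being spread across the audit policy language, auditd, and its parsers.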

Kernel runtime security instrumentation

Posted Sep 7, 2019 20:26 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link] (3 responses)

> You yourself mentioned auditing is a giant slowdown. So, you are now contradicting yourself!
And so will eBPF be. There's also the low-hanging fruit of using BPF to JIT-compile the audit rules.

> Patching / deleting a binary on a really huge number of servers cannot be done in seconds.
What does it have to do with audit slowness?

> Can you audit environment variables with audit? No, you cannot!
> What do you need to do to add support? Change a lot of stuff: the policy language, auditd, parsers, etc.
Do environment variables actually pose a significant threat to warrant a new full-blown, user-controlled arbitrary code injection facility on the critical paths? Can it itself be abused to create livelocks/deadlocks? Can an adversary use it to frustrate efforts to recover? ....

> The development cycle for adding a new signal, and then a new policy based on that signal (e.g. a permission error if the same environment variable is set twice), touches many components; this is an attempt to fix that.
I contend that none of this is even needed, as it's going to be useless and trivial to bypass.

Kernel runtime security instrumentation

Posted Sep 7, 2019 20:52 UTC (Sat) by kpsingh (subscriber, #112411) [Link] (2 responses)

> And so will eBPF be. There's also the low-hanging fruit of using BPF to JIT-compile the audit rules.
^^^^^^^^^^^^^^^^^^^^^
We are doing performance comparisons and it's not.

>> Patching / deleting a binary on a really huge number of servers cannot be done in seconds.
> What does it have to do with audit slowness?

It's got to do with your statement: "If you have a "vulnerable binary" then why the hell is it not deleted?"

> Do environment variables actually pose a significant threat to warrant a new full-blown, user-controlled arbitrary code injection facility on the critical paths? Can it itself be abused to create livelocks/deadlocks? Can an adversary use it to frustrate efforts to recover? ....

Environment variables are one use case where one needs a signal that audit does not currently provide. We are **not** talking about unprivileged eBPF here; it needs CAP_SYS_ADMIN and CAP_MAC_ADMIN. If privileged users want to shoot themselves in the foot, they have plenty of other opportunities.

>> The development cycle for adding a new signal, and then a new policy based on that signal (e.g. a permission error if the same environment variable is set twice), touches many components; this is an attempt to fix that.
> I contend that none of this is even needed, as it's going to be useless and trivial to bypass.

I disagree. It's about building defense in depth. The more hoops an attacker has to jump through to attack you, the slower and harder it gets for them. Anyway, I am happy to hear if you have a constructive solution. Otherwise, this discussion is simply leading nowhere.

Kernel runtime security instrumentation

Posted Sep 7, 2019 22:04 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

> We are doing performance comparisons and it's not.
Then improve it. Translate audit rules into BPF and run them.

> It's got to do with your statement: "If you have a "vulnerable binary" then why the hell is it not deleted?"
How do you recognize that a binary was used for nefarious purposes?

> Environment variables are one use case where one needs a signal that audit does not currently provide.
Then extend it, rather than creating a completely new system. Is there anything else that is not covered by the audit subsystem and that is not a trivial addition?

> I disagree. It's about building defense in depth. The more hoops an attacker has to jump through to attack you, the slower and harder it gets for them. Anyway, I am happy to hear if you have a constructive solution. Otherwise, this discussion is simply leading nowhere.
The constructive solution is simple - improve the audit subsystem instead of adding more eBPF.

Kernel runtime security instrumentation

Posted Sep 7, 2019 22:17 UTC (Sat) by kpsingh (subscriber, #112411) [Link]

>> We are doing performance comparisons and it's not.
> Then improve it. Translate audit rules into BPF and run them.

Feel free to go that route and suggest / make improvements to audit. Audit does not meet our other key requirement: having MAC and signaling (auditing) possible with a single API, which is something you are not constrained by (based on your comments).
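
To illustrate the single-API point, a hypothetical program on the same kind of hook could both stream an event to user space (the signal) and veto the operation (the MAC decision). The ring-buffer map, event layout, and hook name below are assumptions for illustration only, not the KRSI patches:

    /* Illustrative sketch only; names and layouts are assumed. */
    #include "vmlinux.h"
    #include <errno.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    char LICENSE[] SEC("license") = "GPL";

    struct exec_event {
            u32 pid;
            u64 ino;
    };

    /* Ring buffer used to stream exec events to a user-space agent (the signal). */
    struct {
            __uint(type, BPF_MAP_TYPE_RINGBUF);
            __uint(max_entries, 1 << 16);
    } events SEC(".maps");

    /* Inodes of binaries that should not run (the MAC side). */
    struct {
            __uint(type, BPF_MAP_TYPE_HASH);
            __uint(max_entries, 1024);
            __type(key, u64);
            __type(value, u8);
    } blocked_inodes SEC(".maps");

    SEC("lsm/bprm_check_security")
    int BPF_PROG(signal_and_enforce, struct linux_binprm *bprm)
    {
            u64 ino = bprm->file->f_inode->i_ino;
            struct exec_event *e;

            /* Signal: report every exec attempt to user space. */
            e = bpf_ringbuf_reserve(&events, sizeof(*e), 0);
            if (e) {
                    e->pid = bpf_get_current_pid_tgid() >> 32;
                    e->ino = ino;
                    bpf_ringbuf_submit(e, 0);
            }

            /* MAC: the same hook's return value can deny the operation. */
            if (bpf_map_lookup_elem(&blocked_inodes, &ino))
                    return -EPERM;
            return 0;
    }

Both the event stream and the enforcement decision come out of one program loaded through one interface, which is the requirement audit does not cover.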

Kernel runtime security instrumentation

Posted Sep 11, 2019 5:17 UTC (Wed) by ssmith32 (subscriber, #72404) [Link] (1 responses)

If you don't isolate the host, even ignoring security concerns, you're just kinda being a jerk, because you're knowingly providing connected resources to a bad actor. Yes, caveats may apply, but, in general, you should isolate it. Note that trying to justify not isolating it by wanting to gather more info for an exciting security blog/research paper does not qualify as a caveat.

Kernel runtime security instrumentation

Posted Sep 11, 2019 5:48 UTC (Wed) by cpitrat (subscriber, #116459) [Link]

I'd expect some kind of justification when you call a significant number of security researchers (those who use honeypots) jerks.

Otherwise, I can do it too:
If you don't isolate the host, you're not being a jerk.

OK, you said: "because you're knowingly providing connected resources to a bad actor." But anybody can have connected resources, and they're very cheap. Look, I'm using one to answer you.

If you're thinking about a botnet of honeypots, I think you're either overestimating the number of honeypots and their lifespan, or underestimating the number of hosts required for a useful botnet.

Kernel runtime security instrumentation

Posted Sep 11, 2019 4:54 UTC (Wed) by ssmith32 (subscriber, #72404) [Link]

Sooo... a "Safe Mode". Just hold insert while it boots!

Yeah, cheap shot, but toooo easy. I'll be quiet now.

Kernel runtime security instrumentation

Posted Sep 8, 2019 6:48 UTC (Sun) by jezuch (subscriber, #52988) [Link] (2 responses)

Well, my thought was that you would investigate while limiting potential damage, so that you don't alert the attackers and thus have some more time to identify them. But I'm not a security expert, and this sounds dangerous even to me, so I don't expect this to be a plausible scenario.

Kernel runtime security instrumentation

Posted Sep 8, 2019 7:08 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

It's pretty clear that the idea here is to create something like Windows antiviruses, an automatic tool to detect malicious patterns and try to counteract them.

Unfortunately, the patch authors don't seem to have nearly enough experience with that kind of stuff. Modern Windows antiviruses have multiple layers of defenses; they intrude into the very heart of the OS. Windows itself scans and checksums its internal control structures (PatchGuard, CodeIntegrity), and antiviruses turn that up to 11. Which is kinda awe-inspiring - it's like watching CoreWar.

Yet it's still not enough. All the OS protections have been bypassed ( https://www.symantec.com/content/dam/symantec/docs/securi... ) and malware now routinely bypasses antiviruses. This is because attacks don't get worse, they always keep getting better.

Kernel runtime security instrumentation

Posted Sep 11, 2019 5:05 UTC (Wed) by ssmith32 (subscriber, #72404) [Link]

OK, I'm not sure how relevant it is, but that paper is from over 10 years ago, when Symantec was all bent out of shape that Microsoft's drivers were going to be able to do things its drivers - written largely without code review and QA - couldn't.

Some rumblings about antitrust later, an API was provided, Symantec realized Windows was a dying revenue stream, and you haven't seen much work in the area since. So it's a bit of an unknown.

