
It also depends on your security model

Posted Feb 27, 2018 14:41 UTC (Tue) by abatters (✭ supporter ✭, #6932)
In reply to: Shedding old architectures and compilers in the kernel by james
Parent article: Shedding old architectures and compilers in the kernel

My company deploys Linux in embedded x86-64 appliances that never run any untrusted 3rd-party code, so the whole spectre/meltdown thing is irrelevant to those systems even though the underlying arch is vulnerable. So for our use case we don't care about retpolines even on x86-64.


It also depends on your security model

Posted Feb 27, 2018 17:56 UTC (Tue) by rahvin (guest, #16953) [Link] (8 responses)

If the appliance accepts a network connection, any bug in your software could make Spectre/Meltdown a real risk from a determined attacker, particularly if the appliances are exposed to the broader internet. All it takes is a buffer overflow or another bug that lets an attacker inject code into a network service and have it executed. Spectre and Meltdown are the bugs that are going to keep on giving for a very long time.

It also depends on your security model

Posted Feb 27, 2018 18:40 UTC (Tue) by pbonzini (subscriber, #60935) [Link] (7 responses)

There are many cases in which breaking $networkprocess is just as bad as getting root. (There are also many cases in which everything that you worry about is running as root, though these are less likely to run on a core with speculative execution). In both cases, Meltdown and Spectre are not really much to worry about.

It also depends on your security model

Posted Feb 28, 2018 18:54 UTC (Wed) by clump (subscriber, #27801) [Link]

We can only speculate(!) about abatters' use case. You need to think about risks you can't foresee. If you don't use a network, can a network be created? Did you still include a network stack in your embedded OS? Your use case might not include peripherals, but can peripherals be attached? Are you carrying software that's not used in your product?

You can't protect against everything, but you can take precautions. Some performance-impacting mitigations on later kernels can be disabled, but having them enabled by default, with an explicit opt-out, is reasonable.

It also depends on your security model

Posted Mar 1, 2018 1:28 UTC (Thu) by rahvin (guest, #16953) [Link] (4 responses)

I disagree; the capability of Meltdown/Spectre to read out the system's entire memory makes them far and away some of the most dangerous bugs. We need to keep in mind that only the very first exploits have rolled out for these, and because Spectre is fundamental to the design of almost all modern processors, it's going to be the bug that keeps on giving for a very long time.

IMO it's simply a matter of time, with Spectre and the timing attacks it's based on, before we get a remote exploit that makes it the single biggest threat to computing. Hopefully I'm wrong.

It also depends on your security model

Posted Mar 3, 2018 2:47 UTC (Sat) by himi (subscriber, #340) [Link] (3 responses)

I think the point was that if all your code is running as root, /any/ security vulnerability will open up access to any and all memory on the system - Meltdown and Spectre don't add any additional problems.

It also depends on your security model

Posted Mar 3, 2018 19:12 UTC (Sat) by nix (subscriber, #2304) [Link] (2 responses)

Quite. If everything important is running as one uid (as is true of most single-user setups!) both Spectre and Meltdown are also unimportant, because the attack doesn't need to use either when it could just ptrace() far more cheaply.
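
A minimal sketch of that point, assuming a hypothetical target PID and address passed on the command line, and assuming ptrace() isn't restricted by something like Yama's ptrace_scope: with a shared uid there is no need for a side channel at all, because the tracer can read the target's memory directly.

    /* ptrace-peek.c: read one word of a same-uid process's memory */
    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <pid> <address>\n", argv[0]);
            return 1;
        }
        pid_t pid = (pid_t)strtol(argv[1], NULL, 0);
        unsigned long addr = strtoul(argv[2], NULL, 0);

        /* Attach; the target is stopped while we inspect it. */
        if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1) {
            perror("PTRACE_ATTACH");
            return 1;
        }
        waitpid(pid, NULL, 0);

        /* Read a word of the target's address space -- no timing tricks needed. */
        errno = 0;
        long word = ptrace(PTRACE_PEEKDATA, pid, (void *)addr, NULL);
        if (word == -1 && errno)
            perror("PTRACE_PEEKDATA");
        else
            printf("word at %#lx: %#lx\n", addr, (unsigned long)word);

        ptrace(PTRACE_DETACH, pid, NULL, NULL);
        return 0;
    }

(Reading /proc/<pid>/mem after attaching would work just as well; the point is that same-uid isolation is already gone without any help from the CPU.)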

It also depends on your security model

Posted Mar 3, 2018 20:56 UTC (Sat) by excors (subscriber, #95769) [Link] (1 responses)

Unless you're running e.g. untrusted scripts in a sandboxed interpreter. Even if everything is running as root, you expect the sandbox to prevent the scripts accessing arbitrary memory. With Spectre you lose that guarantee. ("Script" could mean JS or eBPF or pretty much any kind of data that can cause the program to do a bounds-checked array access, which might include data that doesn't really look anything like a script at all.)
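
For the sake of illustration, here is a minimal sketch of the gadget shape being described (the classic Spectre variant 1 bounds-check bypass); all of the names are illustrative, not from any real codebase:

    #include <stdint.h>
    #include <stddef.h>

    uint8_t array1[16];          /* indexed with an attacker-supplied x */
    size_t  array1_size = 16;
    uint8_t array2[256 * 512];   /* probe array that becomes the cache side channel */

    uint8_t victim_lookup(size_t x)
    {
        if (x < array1_size) {
            /* Architecturally this never reads out of bounds, but while the
             * branch is still being speculated the CPU may load array1[x] for
             * an out-of-range x and touch a line of array2 that depends on the
             * secret byte; timing accesses to array2 afterwards recovers it. */
            return array2[array1[x] * 512];
        }
        return 0;
    }

The bounds check is still correct as written; the leak happens entirely in misspeculated execution, which is why a purely language-level sandbox cannot prevent it.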

It also depends on your security model

Posted Mar 4, 2018 14:04 UTC (Sun) by nix (subscriber, #2304) [Link]

Yeah, the old guarantees that language purity could enforce security appear pretty much dead now. The implementation always needs to do extra work to keep people from reaching out beyond the bounds of the implementation via side channels like this -- and we don't know what new side channels may exist in the future, or may exist now yet be unknown to us :(

It also depends on your security model

Posted Mar 1, 2018 12:49 UTC (Thu) by matthias (subscriber, #94967) [Link]

You should keep in mind that at least in theory, Spectre can be exploited remotely: Send crafted network packets to train the speculation engine and afterwards extract information based on timing of future packets.

I do not expect this to be a problem right now, as it will be even harder than a local exploit and the bandwidth of the hidden channel will be much lower than for a local attacker. The SNR will be much worse, due to increased noise. But it is not as easy as saying "If $networkprocess is broken, then we have lost anyway". Spectre works even without breaking any process.

It also depends on your security model

Posted Mar 2, 2018 1:34 UTC (Fri) by gerdesj (subscriber, #5446) [Link] (12 responses)

"My company deploys Linux in embedded x86-64 appliances that never run any untrusted 3rd-party code, so the whole spectre/meltdown thing is irrelevant to those systems even though the underlying arch is vulnerable. So for our use case we don't care about retpolines even on x86-64."

What exactly are you contributing to the discussion here? At best you are correct about all of your assertions and at worst some clever bugger will prove you very, very wrong on one or two of them.

Personally I always attack problems assuming I'm stupid. If you find yourself promoting terms like "best practice" (you didn't but you came close) then you are doomed, I tell you, doomed. You may be a clever person but there is always someone else that is better than you and our colleagues here will include some of them - they may point out the flaw in your reasoning. The baddies won't.

That's the way of the sys 'n' network admin as I see it. No tin foil, just simple scepticism. Perhaps you are the dog's bollocks when it comes to IT security but I note that your firm were not involved in this disclosure (or were you?), so were just as surprised as the rest of us (sort of, see papers from ~1993 onwards.)

Me? I'm stupid, always have been and always will be.

It also depends on your security model

Posted Mar 2, 2018 13:01 UTC (Fri) by intgr (subscriber, #39733) [Link] (7 responses)

I agree with the rest of your comment but...

> If you find yourself promoting terms like "best practice" (you didn't but you came close) then you are doomed, I tell you, doomed.

What's wrong with "best practice"? The way I see it, a best practice documents general conventions that are a good idea to follow "stupidly", as you describe. Applying all security updates periodically is one such best practice, even if you think they don't apply to your use case, as opposed to cherry-picking the patches you think apply -- because you're bound to miss some.

It also depends on your security model

Posted Mar 2, 2018 13:34 UTC (Fri) by farnz (subscriber, #17727) [Link]

The problem with "best practice" is that it comes without a rationale, and thus documents things that you "must" do, but not why you must do them. Thus, when advice in a best practices document becomes wrong (e.g. due to external changes), there's no way to even notice that it is wrong, let alone fix it.

It also depends on your security model

Posted Mar 3, 2018 0:13 UTC (Sat) by gerdesj (subscriber, #5446) [Link] (1 responses)

"What's wrong with "best practice"?"

There be bad practice and there be good practice. As soon as I see anyone claiming that they know what is "best practice", I briefly lose the will to live. You said it yourself, in a way: "a best practice documents general conventions that are a good idea to follow" - with "good idea to follow" you dilute "best" down to merely a "good idea".

I am being very pedantic with this point, I know. I doubt that many people here have ever written documentation that claimed to be *shudder* "a best practice". There are a lot of opinionated folk here (me included) and quite a lot of "I'm right and you are talking twaddle" style bike-shedding err ... discussion, but rarely, if ever, do I see an invocation of "best practice". I do see that phrase rather a lot elsewhere, though, and I generally treat it as a litmus test: a possible, but unlikely, guide to solving the problem at hand, which is a bit sad.

I suppose that in the end I'm a bit of a grammar fiend: good, better, best - adjective, comparative, superlative. Soz!

It also depends on your security model

Posted Mar 3, 2018 19:13 UTC (Sat) by nix (subscriber, #2304) [Link]

What about calling it 'best current practice' like the RFCs do, then? That doesn't carry the ridiculous implication that this practice is best for all time, the best ever, rah rah!

It also depends on your security model

Posted Mar 3, 2018 6:21 UTC (Sat) by cladisch (✭ supporter ✭, #50193) [Link]

User Rosinante on the Meta Stack Exchange expressed it best:

> While you may be an exemplary, clear-thinking individual, who uses the term best practice in a constructive manner, you have been preceded by a giant procession of zombies who use it as the antithesis of thought. Instead of understanding the important specifics of their situation and looking for an appropriate solution, all they want is to spot a herd in the distance and go trotting off after it.

> Thus, the term best practice has been rendered an extremely strong signal of an empty resonant cavity in the place where a brain should be, and questions that mention the phrase get closed.

It also depends on your security model

Posted Mar 8, 2018 14:17 UTC (Thu) by Wol (subscriber, #4433) [Link] (2 responses)

> I agree with the rest of your comment but...

> > If you find yourself promoting terms like "best practice" (you didn't but you came close) then you are doomed, I tell you, doomed.

Not often, but some things are clearly best practice. Like what I documented :-) at a previous employer ...

"Switch compiler warnings to max, and if you can't fix them you need to explain them".

:-) :-)

Cheers,
Wol

It also depends on your security model

Posted Mar 8, 2018 15:00 UTC (Thu) by excors (subscriber, #95769) [Link] (1 responses)

> Not often, but some things are clearly best practice. Like what I documented :-) at a previous employer ...
>
> "Switch compiler warnings to max, and if you can't fix them you need to explain them".

But that isn't clearly best. In some compilers, the maximum 'warning' level (e.g. Clang's -Weverything) includes various diagnostic messages that are merely pointing out something the compiler noticed about your code, without implying there's really any problem with it. Perhaps those messages are useful in niche situations, or they were requested by one customer with a bizarre coding style, or a bored compiler developer added them because they could. It's not worth your time to modify your code to avoid those warnings, and it's probably not even worth your time going through the list of the hundreds of available messages and selectively disabling them. The compiler developers have already produced a list of reasonably useful warnings and called it -Wall or something, so that's usually a better place to start.

Maybe "set everything to max" is good advice when -Wall *is* the max. But if you just state that advice without any rationale, it will become poor advice when people switch to a compiler with many less-useful warnings.

It also depends on your security model

Posted Nov 1, 2018 5:23 UTC (Thu) by brooksmoses (guest, #88422) [Link]

Also, of course, there's the "which compiler?" question. Once you start getting into esoteric "extra" warnings, it's pretty common to end up with situations where avoiding a warning in one compiler will cause a warning in another.

It also depends on your security model

Posted Mar 2, 2018 17:13 UTC (Fri) by abatters (✭ supporter ✭, #6932) [Link] (3 responses)

In a discussion of dropping support for old compilers simply because they lack retpoline support, the point, of course, was that since retpoline support may not be essential to everyone, keeping support for the old compilers might still be useful to some people even for x86-64. Upgrading development environments is a PITA.

I have found the responses to my comment very insightful. I am glad I posted my comment, if only to get other people to share their perspectives. Let me summarize:

1) If a system does not allow 3rd-party code to be installed or run, and is not network-attached, then Meltdown/Spectre might not matter.

2) If a system does not allow 3rd-party code to be installed or run, and is network-attached, then Spectre might be remotely exploitable by network packet timings, although this has yet to be demonstrated and is only theoretical.

3) If a system does not allow 3rd-party code to be installed or run, and is network-attached, and an attacker manages remote code injection into a network process, then Meltdown/Spectre can be used for further compromise. In some cases, compromising an important network process is already "game over" so Meltdown/Spectre would not be of any use, but for other cases, Meltdown/Spectre could be used to gain complete control after compromising a low-privilege network process.

It also depends on your hardware

Posted Mar 3, 2018 13:53 UTC (Sat) by anton (subscriber, #25547) [Link] (2 responses)

> keeping support for the old compilers might still be useful to some people even for x86-64.

In addition to those where Meltdown/Spectre are irrelevant because there is no attack scenario where they worsen the situation, there are also x86-64 CPUs that are not vulnerable to Meltdown/Spectre (in particular, those based on Bonnell), and hopefully we will see many more CPUs without these vulnerabilities in a few years.

It also depends on your hardware

Posted Mar 8, 2018 20:20 UTC (Thu) by landley (guest, #6789) [Link] (1 responses)

Linux on nommu still exists. If a system without Spectre/Meltdown mitigation can't possibly be viable, you're expressing a pretty strong opinion about those systems too. (There are more nommu processors than with-mmu processors on the planet, in about the same way there are more bacterial cells than mammalian cells. This is the dark matter of the hardware world.)

Half the embedded developers already tend to stay several versions behind on everything (kernel, compiler, libc, userspace); for example, it took them several years to notice upstream uClibc was dead and to move to musl-libc or the uClibc necromancy fork. That's why I had a support horizon (http://landley.net/toybox/faq.html#support_horizon) in busybox (and now toybox): a clear policy with leeway for old systems, but also a defined "ok, that's enough, you can move now" cutoff.

If you remove support for all but the newest toolchains, they'll just stop upgrading the kernel in their SDK, meaning they'll develop _new_ devices against old stuff. (It's already enough of a pain to convince embedded developers to stop developing new devices based on a 2.6 kernel they have lying around. Make your policies too tight and you lose traction.)

Rob

It also depends on your hardware

Posted Mar 8, 2018 21:09 UTC (Thu) by excors (subscriber, #95769) [Link]

I believe nommu systems still need Spectre mitigation, as long as they do speculative execution. Something like a JavaScript interpreter that is designed to run untrusted code in a sandbox ought to be safe on a nommu system, but Spectre still makes it vulnerable, exactly as it would be on a system with an MMU.

