CVE-less vulnerabilities
Posted Jun 27, 2019 23:06 UTC (Thu) by rra (subscriber, #99804)
Parent article: CVE-less vulnerabilities
> It's my understanding that while it's often possible to bypass those, doing so in non-scripting scenarios (e.g. in an image parser) is really hard and often impossible.
This is a (common) misunderstanding of how people use image parsing programs in practice. There is quite a lot of fully automated use out there where an attacker has essentially unlimited attempts. Think of thumbnailing of uploaded images, image resizing of profile pictures, and so forth.
Image parsers are the most common example (they are used a lot, probably more than most people expect), but versions of this misunderstanding crop up a lot. I remember recently when some people were surprised by a CVE in the file utility, thinking that at worst file might be called interactively on untrusted input, and several people said no, there were places where file was being called inside automation and an attacker could keep trying until they succeeded.
Command-line tools are useful, and UNIX is good at plugging them together in pipelines to make automated tools, and as a result, it's rare to find a program that you can guarantee is never run unattended, repeatedly, on untrusted input. You should assume all software will be run this way unless you can prove it's not.
Personally, I think it's no longer an acceptable security practice to run an image parser on untrusted input outside of a sandbox. We should keep fixing bugs in the parsers as a first line of defense, but that line of defense clearly fails frequently enough that one needs a second line of defense (a seccomp sandbox, a dedicated container, something). That's probably true of most parsers that are even mildly complex. Recent history says that it's true even of the file utility.
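[Editorial note: rra's "second line of defense" can be sketched concretely. This is a minimal illustration in Python; the resource caps, the helper name `parse_in_child`, and the inline stand-in "parser" are invented for the sketch. A real deployment would run the actual parser under seccomp, bubblewrap, or a container rather than relying on rlimits alone.]

```python
import os, resource, subprocess, sys, tempfile

def parse_in_child(parser_argv, timeout=5):
    """Run a parser over untrusted input in a throwaway child process
    with hard, kernel-enforced resource limits (POSIX only)."""
    def limit():
        # Cap CPU time (2 s) and address space (512 MiB): an exploited
        # or runaway parser is killed by the kernel instead of taking
        # the whole service with it.
        resource.setrlimit(resource.RLIMIT_CPU, (2, 2))
        resource.setrlimit(resource.RLIMIT_AS, (512 * 2**20,) * 2)
    return subprocess.run(parser_argv, preexec_fn=limit,
                          capture_output=True, timeout=timeout)

# Stand-in "parser": just reports the size of its input file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x89PNG not really")
    path = f.name
proc = parse_in_child([sys.executable, "-c",
    "import sys; print(len(open(sys.argv[1], 'rb').read()))", path])
os.unlink(path)
print(proc.returncode, proc.stdout.decode().strip())
```

Even this crude version means a crafted image that crashes or hijacks the parser damages only a disposable child process.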
Posted Jun 28, 2019 15:07 UTC (Fri)
by rweikusat2 (subscriber, #117920)
[Link] (23 responses)
This kind of magic thinking keeps amazing me: In order to guard against unknown bugs in 'complex' software A, we must use significantly more complex software B which is believed to be free of unknown bugs because of ... well just because ...
Posted Jun 28, 2019 15:56 UTC (Fri)
by rahulsundaram (subscriber, #21946)
[Link] (21 responses)
That is not an earnest attempt at understanding a different perspective. All software, including sandboxes, can have bugs. However, it is easier to specify access points and fix any bugs once in a sandbox that is specifically designed with security in mind than in parsers, which often have a long legacy and a history of security vulnerabilities. We have sufficient experience with sandboxes (for example, the Chrome sandbox or bubblewrap) to know both that a) sandboxes are not a magic solution and b) they do help to stop or mitigate vulnerabilities.
Posted Jun 28, 2019 17:34 UTC (Fri)
by rweikusat2 (subscriber, #117920)
[Link] (20 responses)
One should really call this a higher-order logical fallacy, as it uses the exact same fallacious reasoning to justify two contradictory conclusions.
Posted Jun 28, 2019 17:51 UTC (Fri)
by mathstuf (subscriber, #69389)
[Link]
You're missing that the parser was written to "get a job done" which is to make a set of pixels from encoded data. The sandboxing software was written to get a job done as well: limit what can be done within an environment meant to run untrusted code. The mindset and review rigor difference between the two should be readily apparent in that I'd expect the latter to actually have a security model and documentation on how it handles that model. The former rarely has one until it has systemic problems.
Posted Jun 28, 2019 17:52 UTC (Fri)
by rahulsundaram (subscriber, #21946)
[Link] (2 responses)
Of course there is, but in this case you have already made up your mind, and that's fine. I will just point out that, in general, the industry trend is moving toward more sandboxes, not fewer, based on empirical evidence that they are useful. You are just the outlier here.
Posted Jun 28, 2019 18:00 UTC (Fri)
by rweikusat2 (subscriber, #117920)
[Link] (1 responses)
Posted Jun 28, 2019 18:10 UTC (Fri)
by rahulsundaram (subscriber, #21946)
[Link]
How is empirical evidence an appeal to popularity? Just look at vulnerabilities in parsers vs sandboxes that protect them. It isn't that hard. If you are going to have the outlier position that goes against that evidence, make your case.
Posted Jun 28, 2019 18:02 UTC (Fri)
by excors (subscriber, #95769)
[Link]
When we know the parser has had several hundred security issues in the past (see https://www.cvedetails.com/vulnerability-list/vendor_id-1... , most of which say "...via a crafted file"), we can be pretty sure it's going to have a lot more.
> We don't know if the sandboxing software has security issues, hence, it probably doesn't.
The sandbox doesn't need to be perfect. To exploit a sandboxed parser, you need to find a bug in the parser *and* a bug in the sandbox. That's strictly harder than finding a bug in the parser, so the sandbox makes the system more secure. It's like the most obvious example of defense in depth.
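[Editorial note: excors's "strictly harder" point can be put in rough numbers. The probabilities below are made up purely for illustration (nothing here comes from real CVE data): if finding a parser bug and finding an independent sandbox escape are independent events, the chance of a full compromise is their product, which can never exceed either factor alone.]

```python
# Illustrative, invented numbers: the chance an attacker finds a usable
# parser bug vs. an independent sandbox escape during one campaign.
p_parser_bug = 0.9
p_sandbox_escape = 0.05

# Both are needed to compromise the sandboxed system.
p_compromise = p_parser_bug * p_sandbox_escape

# The product is bounded by the smaller factor: the sandbox only helps.
assert p_compromise <= min(p_parser_bug, p_sandbox_escape)
print(round(p_compromise, 3))  # 0.045
```

The arithmetic assumes independence; a shared flaw (say, a common library) would weaken the bound, which is why defense-in-depth layers should be as dissimilar as possible.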
Posted Jun 28, 2019 20:25 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (3 responses)
You clearly live in a separate parallel reality. The never-ending list of critical CVEs in parsers is probably the longest list of vulnerabilities.
Posted Jun 28, 2019 21:11 UTC (Fri)
by rweikusat2 (subscriber, #117920)
[Link] (2 responses)
But that's really entirely beside the point. The base situation is still "it is unknown if $parser has security issues, hence it probably has" vs "it's unknown if $sandbox has security issues, hence, it's probably safe to use". These are still two appeals to ignorance used to justify both a statement "A" and the inverse statement "not A".
NB: I do not claim that the parser must be safe or that the sandbox must be unsafe, only that absolutely nothing can be validly concluded from ignorance.
Posted Jun 28, 2019 22:39 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Just look at libpng, libgzip, libjpeg, Apache, libcurl, ... for examples. Every one of them had multiple severe vulnerabilities in parsing.
So if you are using a C-based parser for untrusted data (which in this world means "all data") then you have at least one security hole.
Sandboxes are a way to mitigate this. They still can have bugs but there is just a handful of sandboxes, and some of them are written in languages that preclude most of regular C bugs.
Of course, if you want to live in a perfect world then you should turn off all computers tomorrow and start a Manhattan-project-style mass rewrite of C software in saner languages.
Posted Jun 30, 2019 10:24 UTC (Sun)
by nix (subscriber, #2304)
[Link]
You're suggesting that finding more bugs in some piece of software is evidence that it has fewer bugs than other software in which fewer bugs were found in the first place, and you don't think this might perhaps be motivated reasoning so contorted that you could probably find singularities in it?
Thousands of image and other media parser bugs have been found (and continue to be found), even discounting the endless flood from Wireshark's network protocol dissectors. How many bugs have been found in Chrome's sandbox, which has annual events in which people basically get paid for finding holes in it? Is it as many as a dozen?
Posted Jun 29, 2019 17:55 UTC (Sat)
by sorokin (guest, #88478)
[Link] (10 responses)
I would like to add two more points:
1. Most parsers are single-threaded while most sandboxing environments have to deal with some forms of concurrency. Which means they are difficult to test.
2. In addition it is very easy to write tests for parsers, because normally they behave like pure functions: they have some input and they produce some output.
Posted Jun 29, 2019 18:39 UTC (Sat)
by Cyberax (✭ supporter ✭, #52523)
[Link] (9 responses)
> 1. Most parsers are single-threaded while most sandboxing environments have to deal with some forms of concurrency. Which means they are difficult to test.

Not really. Most of sandboxing environments are thread-agnostic or single-threaded. Unless you're looking at full VMs.

> 2. In addition it is very easy to write tests for parsers, because normally they behave like pure functions: they have some input and they produce some output.

As history has shown, this is not enough.
Posted Jun 30, 2019 7:47 UTC (Sun)
by sorokin (guest, #88478)
[Link] (8 responses)
OS kernel<->userspace boundary is multithreaded. x86 VMs are multithreaded. Managed languages VMs are multithreaded.
> As history has shown, this is not enough.
Enough for what? I'm telling that one type of programs is easy to write tests for and one is not.
Posted Jun 30, 2019 7:51 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link] (7 responses)
> OS kernel<->userspace boundary is multithreaded. x86 VMs are multithreaded. Managed languages VMs are multithreaded.

No. Quite a few sandboxes prohibit launching new threads, including the one used in Chrome, for example. Managed VMs can also be single-threaded just fine - see Python or Nodejs.

> Enough for what? I'm telling that one type of programs is easy to write tests for and one is not.

Enough to avoid security bugs. The only effective way to de-louse the C programs is fuzz testing, everything else simply doesn't work. And fuzzing can't find everything.
Posted Jun 30, 2019 10:17 UTC (Sun)
by sorokin (guest, #88478)
[Link] (6 responses)
We were talking about sandboxing. Do you know any examples where people rely on Python or Node.js security to run untrusted code? I don't know of any.
I cannot comment about Chrome, though. I know that it is multithreaded, but perhaps its sandbox runs single-threaded code; I don't know. Perhaps this is an example of a single-threaded sandboxed environment, but it is still an exception, and over time we will see fewer and fewer single-threaded programs.
> Enough to avoid security bugs. The only effective way to de-louse the C programs is fuzz testing, everything else simply doesn't work. And fuzzing can't find everything.

You also need fuzzing to test sandboxes. I would say that, for now, fuzzing is the most effective way of finding bugs in programs.
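[Editorial note: both of sorokin's earlier points can be illustrated together. The sketch below uses Python's json module as a stand-in for any pure parser; serious fuzzing would use a coverage-guided tool like AFL or libFuzzer, but because a pure parser has no state to set up, even the harness is just a loop.]

```python
import json
import random

def fuzz(parser, reject_ok=(ValueError,), iterations=2000, seed=1):
    """Feed random bytes to a pure parser. Cleanly rejecting garbage is
    expected; any other exception is a bug worth reporting."""
    rng = random.Random(seed)
    failures = []
    for _ in range(iterations):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 64)))
        try:
            parser(blob)
        except reject_ok:
            pass                      # expected: garbage rejected cleanly
        except Exception as exc:      # unexpected: record input and error
            failures.append((blob, exc))
    return failures

# json.loads is pure: same input, same output, no shared state to mock.
print(len(fuzz(json.loads)))  # 0 expected: json rejects noise cleanly
```

For a C parser the same loop would wrap the binary in a subprocess and treat any signal-death (segfault, abort) as a failure, which is exactly the class of bug that becomes a CVE.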
Posted Jun 30, 2019 17:48 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
> We were talking about sandboxing. Do you know any examples where people rely on Python or Node.js security to run untrusted code?

You mentioned managed languages first. So stop moving goalposts.

> I cannot comment about Chrome, though.

The sandboxed code is single-threaded.

You also don't seem to understand the whole concept of sandboxes. Sandboxes limit the operations that the contained code can do, whether it's multithreaded or not. And it's not like regular uncontained libraries can't use threads themselves, either directly or through injected exploit code.
> You also need fuzzing to test sandboxes. I would say that, for now, fuzzing is the most effective way of finding bugs in programs.

No. The most effective way is to write software in safe languages. E.g. a sandbox in Rust: https://github.com/firecracker-microvm/firecracker - they have had no CVEs so far. The second most powerful way is to contain code in sandboxes.
Posted Jul 1, 2019 21:13 UTC (Mon)
by samuelkarp (subscriber, #131165)
[Link]
Hey, thanks for the shout-out! This is somewhat of a sidebar, but hopefully I can help a bit. While Firecracker is used in production at AWS, it's a relatively young project. We've certainly tried to build Firecracker in a secure way (Rust is one part of that, a very limited device model is another part, and limiting privileges by default with seccomp and jailing is another), but the short amount of time it has been open-source means the security community hasn't had a lot of time to necessarily identify any issues in it. I think this may be a much stronger argument in the future as the project continues to mature and (hopefully!) security-minded folks have an opportunity to look deeper.
(disclosure: I work for AWS and work with the Firecracker team; I am helping to build firecracker-containerd, a project focused on managing container workloads inside Firecracker microVMs.)
Posted Jul 4, 2019 17:59 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
> No. The most effective way is to write software in safe languages. E.g. a sandbox in Rust: https://github.com/firecracker-microvm/firecracker - they had no CVEs so far.
The only guaranteed secure way to write secure software is to prove it correct. Try doing that with C and its undefined behaviours - the end result will be quantum with bugs popping in and out of existence at random. And they do! I'm pretty sure there have been several occasions when an upgrade to GCC has caused bugs in software to pop INTO existence ... (because the compiler exploited a loophole in the definition of the language)
Cheers,
Wol
Posted Jul 1, 2019 9:44 UTC (Mon)
by james (subscriber, #1325)
[Link] (2 responses)
I know it's Perl, but I'd love the ability to run SpamAssassin in a sandbox (without making any complaints at all about either SpamAssassin or Perl security).
Posted Jul 1, 2019 17:35 UTC (Mon)
by sorokin (guest, #88478)
[Link] (1 responses)
I completely agree with that point. In many cases this can be a completely adequate security measure.
Actually my original comment was about testing. Looking back I regret that I even responded to Cyberax who is arguing just for the sake of arguing.
Posted Jul 4, 2019 18:03 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
And while "security through obscurity" is a bad thing to *rely* on - as a deliberate *extra* feature on top of other measures it *is* good. Run a parser inside a sandbox on a hardened kernel - the attacker has to first discover the security measure before he can attack it, which gives you extra opportunity to discover *him*.
Cheers,
Wol
Posted Jul 8, 2019 5:10 UTC (Mon)
by HelloWorld (guest, #56129)
[Link]
> We were talking about sandboxing. Do you know any examples where people rely on Python or Node.js security to run untrusted code? I don't know of any.

Well, this thread started with rra saying:

> Personally, I think it's no longer an acceptable security practice to run an image parser on untrusted input outside of a sandbox.

> needs a second line of defense (a seccomp sandbox, a dedicated container, something). That's probably true of most parsers that are even mildly complex.

illustrating that sandboxing isn't just for untrusted code -- it's also for mostly-trusted code that is likely to handle hostile data (and where you might not totally trust the language sandbox).