Unmaintained filesystems as a threat vector
HFS (and HFS+) in the kernel
Back in January, the syzbot fuzzing system reported a crash with the HFS filesystem. For those who are not familiar with HFS, it is the native filesystem used, once upon a time, by Apple Macintosh computers. Its kernel configuration help text promises that users "will be able to mount Macintosh-formatted floppy disks and hard drive partitions with full read-write access". It seems that, in 2023, there is little demand for this capability, so the number of users of this filesystem is relatively low.
The amount of maintenance it receives is also low; it was marked as orphaned in 2011, at which point it had already seen some years of neglect. So it is not all that surprising that the syzbot-reported problem was not fixed or, even, given much attention. At the end of the brief discussion in January, Viacheslav Dubeyko, who occasionally looks in on HFS (and the somewhat more modern HFS+ filesystem as well), said that there was nothing to be done in the case where a filesystem has been deliberately corrupted.
On July 20, Dmitry Vyukov (who runs syzbot) restarted the discussion by pointing out that the consequences of a bug in HFS can extend beyond the small community of users of that filesystem: "Most popular distros will happily auto-mount HFS/HFS+ from anything inserted into USB (e.g. what one may think is a charger). This creates interesting security consequences for most Linux users".
There is an important point in that message that is worth repeating: users may not be aware that the device they are plugging into their computer contains a filesystem at all. One often sees warnings about plugging random USB sticks into a computer, but any device — or even a charging cable — can present a block device with a filesystem on it. If the computer mounts that filesystem automatically, "interesting security consequences" may indeed follow.
The new round of discussion still has not resulted in the problem being fixed. Instead, some developers called for the removal of the HFS and HFS+ filesystems entirely. Matthew Wilcox said: "They're orphaned in MAINTAINERS and if distros are going to do such a damnfool thing, then we must stop them". Dave Chinner argued that the kernel community needs to be more aggressive about removing unmaintained filesystems in general:
We need to [be] much more proactive about dropping support for unmaintained filesystems that nobody is ever fixing despite the constant stream of corruption- and deadlock-related bugs reported against them.
Linus Torvalds, though, was unimpressed, saying that, instead, distributors should just fix the behavior of their systems. The lack of a maintainer, he added, is not a reason to remove a filesystem that people are using; "we have not suddenly started saying 'users don't matter'". That brought the discussion to an end, once again, with no fix for the reported bug in sight.
Distribution changes
As the conversation was reaching an end on the linux-kernel list, it picked up on debian-devel. There, Marco d'Itri asked the kernel developers to simply blacklist HFS and HFS+ from being used for automounting filesystems. Matthew Garrett, though, pointed out that the kernel, which cannot completely block automounting without disabling the filesystem type entirely, was probably the wrong place to solve the problem. Instead, he suggested, a udev rule could be used to prevent those filesystems from being automounted, while keeping the capability available for users who manually mount HFS or HFS+ filesystems.
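Such a udev rule might look like the following sketch. The file name is hypothetical; the rule relies on the UDISKS_IGNORE device property, which udisks2 (the daemon behind most desktop automounting) consults when deciding whether to handle a device:

```
# /etc/udev/rules.d/90-no-hfs-automount.rules (hypothetical file name)
# Tell udisks2 not to automount HFS/HFS+ volumes; manual mounting
# with mount(8) remains possible.
SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="hfs", ENV{UDISKS_IGNORE}="1"
SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="hfsplus", ENV{UDISKS_IGNORE}="1"
```

Because the filesystem drivers stay enabled in the kernel, a user who knows a device holds an HFS volume can still mount it explicitly.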
Shortly thereafter, Garrett raised the issue on the Fedora development list as well, suggesting the addition of a udev rule once again. There, some participants saw that rule as perhaps improving the situation, but others, including Zbigniew Jędrzejewski-Szmek and Michael Catanzaro, pointed out that, if a user wants to see the files contained within a filesystem image, they will do what is needed to mount it, even if that mounting does not happen automatically. Solomon Peachy suggested that adopting this policy would only result in an addition to the various "things to fix after installing Fedora" lists telling users how to turn automounting back on.
Nobody mentioned the possibility that the user was not expecting a given device to have a filesystem at all. Forcing such a filesystem to be mounted manually would presumably address that problem, since most users would not go to the trouble of mounting a filesystem that they did not expect to be there in the first place. But, as Demi Marie Obenour pointed out, a malicious filesystem image could be employed willingly by a user to take control of a locked-down system:
Unfortunately, this original threat model is out of date. kernel_lockdown(7) explicitly aims to prevent root from compromising the kernel, which means that malformed filesystem images are now in scope, for all filesystems. If a filesystem driver is not secure against malicious filesystem images, then using it should fail if the kernel is locked down, just like loading an unsigned kernel module does.
In that case, it seems, disabling automounting would not be a sufficient fix; the vulnerable filesystem type would need to be disabled entirely.
There is an aspect of the problem that has not received as much attention as it might warrant, though Eric Sandeen did touch on it: the number of filesystem implementations in Linux that are robust in the face of a maliciously corrupted image is quite close to zero. Many filesystems can deal with corruption resulting from media errors and the like; checksums attached to data and metadata will catch such problems. Malicious corruption, instead, will have correct checksums, entirely bypassing that line of defense. Filesystem developers who have thought about this problem are mostly unanimous in saying that it cannot readily be solved — the space for possible attacks is simply too large.
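The distinction between accidental and malicious corruption can be made concrete with a minimal sketch, using plain CRC32 as a stand-in for whatever checksum a real filesystem attaches to its metadata:

```python
import struct
import zlib

def make_block(payload: bytes) -> bytes:
    # A metadata block: payload followed by its CRC32, roughly as a
    # checksumming filesystem would store it.
    return payload + struct.pack("<I", zlib.crc32(payload))

def verify(block: bytes) -> bool:
    payload, (crc,) = block[:-4], struct.unpack("<I", block[-4:])
    return zlib.crc32(payload) == crc

good = make_block(b"inode 42 -> extent 1000+8")
assert verify(good)

# Media corruption: a flipped bit is caught by the checksum.
flipped = bytearray(good)
flipped[0] ^= 0x01
assert not verify(bytes(flipped))

# Malicious corruption: the attacker rewrites the metadata *and*
# recomputes the checksum, so verification happily passes.
evil = make_block(b"inode 42 -> extent 0+999999999")
assert verify(evil)
```

The checksum only proves that the metadata and its checksum were written together; it says nothing about whether the metadata itself is sane, which is why the parsing code must still defend itself against arbitrary values.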
So, while unmaintained filesystems like HFS may provide a sort of low-hanging fruit for attackers, they are not the sole cause of the problem. Intensively maintained filesystems, including ext4, Btrfs, and XFS, are also susceptible to malicious filesystem images. So even removing support entirely for the older, unmaintained filesystem types would not solve the problem.
In the Debian discussion, Garrett suggested that risky filesystems could be mounted as FUSE filesystems in user space, thus making it much easier to contain any ill effects — "but even though this has been talked about a bunch I haven't seen anyone try to implement it". On the Fedora side, Richard W. M. Jones suggested that libguestfs, which mounts filesystems within a virtual machine, could be used. Once again, that would contain the results of any sort of exploitation attempt.
If the objective is truly to make it safe for users to mount untrusted
filesystems, some sort of isolation will almost certainly prove to be
necessary. Making most filesystem implementations robust against malicious
filesystem images just does not seem to be an attainable goal in the near
future — even if resources were being put toward that goal, which is not
happening to any great extent. It is not a simple solution, and the result
will have a performance cost, but security often imposes such costs.
| Index entries for this article | |
|---|---|
| Kernel | Filesystems/Security |
| Kernel | Security |
Posted Jul 28, 2023 16:44 UTC (Fri)
by KJ7RRV (subscriber, #153595)
[Link] (6 responses)
[citation needed]
I would expect that most users would see an unexpected filesystem and think, "Hmm, what's this?" and mount it. More security-conscious users wouldn't, but I think most users who would plug in an untrusted device in the first place would probably mount an unexpected filesystem.
Posted Jul 28, 2023 17:06 UTC (Fri)
by hkario (subscriber, #94864)
[Link]
Posted Jul 28, 2023 18:24 UTC (Fri)
by Nahor (subscriber, #51583)
[Link] (3 responses)
Posted Jul 28, 2023 23:52 UTC (Fri)
by rgmoore (✭ supporter ✭, #75)
[Link] (2 responses)
There are reasons for wanting an ordinary USB device to also have a small block device. If it's an unusual device, the block device could contain drivers or other software needed to make it work. It's a totally legitimate thing to do; it even protects the buyer against the company no longer providing the software on their web site. Not to mention that USB is designed to allow users to daisy chain devices, so it would be totally normal for plugging in a single cable to add multiple devices to the system simultaneously.
Posted Jul 29, 2023 1:06 UTC (Sat)
by pizza (subscriber, #46)
[Link] (1 responses)
FWIW, I own two such devices, one is a cellular modem and the other is a label printer.
There are also various microcontroller development boards that present as a mass-storage device when plugged in -- this filesystem is completely fake/virtual, and exists to allow the device firmware to be updated without requiring any additional tools (or permissions).
> Not to mention that USB is designed to allow users to daisy chain devices, so it would be totally normal for plugging in a single cable to add multiple devices to the system simultaneously.
Modern laptop docks are nearly exclusively set up this way. But even putting those aside, I also own multiple Hub+card reader widgets that I use nearly daily, so plugging stuff in that presents a mass storage device is something completely routine.
Remember, "Of course I want to access that device; it's why I plugged it in!" is the overwhelmingly common use case here, and we have to remember to not throw up usability impediments or we'll just have users disable/bypass these mechanisms. By all means, let's harden things as much as possible, including sandboxing (eg via libguestfs), but that has to all be automatic and completely transparent for it to be viable. Security is meaningless if it results in an unusable system.
Posted Jul 31, 2023 19:12 UTC (Mon)
by rgmoore (✭ supporter ✭, #75)
[Link]
I have to think FUSE is the right way to deal with this kind of thing, at least as a default. Most removable media is relatively low performance, so the added overhead of the user space driver is a reasonable price to pay for better security. The rare case of high-performance removable media should be treated as the exception rather than the standard. The big problem is just that there aren't FUSE drivers for every filesystem, and the more obscure and less well maintained the kernel driver, the less likely it is there will be a FUSE implementation. What we need to make it work is a way of letting FUSE use kernel drivers.
Posted Jul 31, 2023 20:15 UTC (Mon)
by estansvik (guest, #127963)
[Link]
Posted Jul 28, 2023 18:12 UTC (Fri)
by shironeko (subscriber, #159952)
[Link] (15 responses)
Posted Jul 28, 2023 20:10 UTC (Fri)
by smoogen (subscriber, #97)
[Link]
The list goes on. Basically, look at any filesystem thread where someone comes up with a new filesystem to see all the 'crap' that normal filesystems have to deal with just to 'function', which eventually turns a simple system into a very complex one, with various features being thrown out because the complexity was just too high for 'normal' drives, which are meant to be in a 'trusted' environment. Now try to add in all the corruption they have to deal with on removable media, and try to come up with a way to make the system 'ensure' that the drive is both ok and not lethal.
I don't know of any current papers on this... so maybe someone has come up with a better way to deal with this.
Posted Jul 30, 2023 6:01 UTC (Sun)
by flussence (guest, #85566)
[Link] (13 responses)
C experts still struggle to parse ASN.1 securely and that's a static target, half a century old and a few kilobytes in size. So far the best answer for _that_ is "invent an entirely new language where C mistakes cannot be expressed", and that's like trying to switch a national grid to 70Hz.
Posted Jul 30, 2023 7:57 UTC (Sun)
by mb (subscriber, #50428)
[Link] (2 responses)
That's not true. There are cheap and easy to handle "transformers" available. Called FFI.
Posted Aug 5, 2023 14:35 UTC (Sat)
by smammy (subscriber, #120874)
[Link] (1 responses)
Posted Aug 5, 2023 16:59 UTC (Sat)
by mb (subscriber, #50428)
[Link]
Posted Jul 30, 2023 19:04 UTC (Sun)
by DemiMarie (subscriber, #164188)
[Link] (9 responses)
Posted Jul 31, 2023 3:58 UTC (Mon)
by willy (subscriber, #9762)
[Link] (3 responses)
I'm not opposed to Rust, but the idea that we'll mandate rewriting everything in Rust is madness. It'll need to be piece-by-piece, with the push coming from the maintainers themselves, not by pushing the already overworked people who understand filesystems to also learn Rust.
Posted Jul 31, 2023 12:13 UTC (Mon)
by liw (subscriber, #6379)
[Link] (1 responses)
However, if there's appetite for a rewrite, that would be a good time to consider reviewing and documenting detailed requirements, how testing is done, and generally how to improve things. Adding comprehensive test suites would make sense before a rewrite is started, in any case.
(Blatant advertising: I give a basics of Rust training course, for a fee. I'm available, if there's interest. But I do userland, not kernel, Rust.)
Posted Jul 31, 2023 13:01 UTC (Mon)
by willy (subscriber, #9762)
[Link]
I don't need to be sold on the value of unit tests; I have many for the XArray. Where possible they can be run both in-kernel and in userspace. They're very important to me. I just don't know how useful they'd be to a filesystem. xfstests seems to cover all the ground that unit tests would, and while we could move that work into unit tests, it seems a lot like wasted time.
Darrick & Kent have been playing with gcov recently and they're hitting some reasonably high percentages -- 82.3% of lines for fs/xfs for example.
Posted Aug 5, 2023 6:19 UTC (Sat)
by jezuch (subscriber, #52988)
[Link]
Easy: in a scenario where rewriting everything in Rust is feasible, training a few dozen fs developers in Rust seems like the easiest part.
Posted Aug 1, 2023 14:32 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (1 responses)
Better than Rust would be to write the critical parts of filesystems in a constrained language similar to Wuffs. Done right, this ensures that any corruption of the filesystem data structures (malicious or otherwise) simply leads to the kernel misinterpreting the filesystem contents, but does not allow you to exploit bugs in the kernel (you might have permissions wrong on a file, or a file containing the metadata of another file, including ownership etc).
But that requires someone to invent the domain-specific language and maintain it, even if it "just" compiles down to C and Rust data structures for in-kernel use.
Posted Aug 1, 2023 14:57 UTC (Tue)
by DemiMarie (subscriber, #164188)
[Link]
Posted Aug 1, 2023 14:59 UTC (Tue)
by DemiMarie (subscriber, #164188)
[Link]
Posted Aug 9, 2023 22:02 UTC (Wed)
by ebiederm (subscriber, #35028)
[Link] (1 responses)
A big danger is kernel stack overflow as the kernel stack size is limited.
Another danger is plausible but invalid filesystem state such as hard-linked directories. Perhaps appearing as a circular directory tree, that you can descend forever.
There also needs to be guarding against lock inversions, caused by plausible but invalid data structures.
The only idea I can think of that might make the problem tractable is to use a public/private key pair. With the public key used to verify the filesystem checksums. The private key would be needed to write them. That would at least allow fsck to be able to validate the filesystem.
But I seriously recommend FUSE with a filesystem driver hooked to a user mode linux kernel so all linux filesystems can be supported.
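The public/private-key idea above can be sketched as follows. Python's standard library has no asymmetric signatures, so this sketch substitutes an HMAC with a writer-held key; in the real scheme the writer would sign with a private key and fsck would verify with the public key, so the verifier could never forge a tag:

```python
import hashlib
import hmac

# Hypothetical scheme: metadata blocks carry an authenticated checksum
# that only the legitimate filesystem writer can produce. HMAC-SHA256
# stands in for an asymmetric signature (e.g. Ed25519) here.
WRITER_KEY = b"held-only-by-the-filesystem-writer"

def seal(metadata: bytes) -> bytes:
    # Append a 32-byte authentication tag to the metadata.
    return metadata + hmac.new(WRITER_KEY, metadata, hashlib.sha256).digest()

def verify(block: bytes) -> bool:
    metadata, tag = block[:-32], block[-32:]
    expected = hmac.new(WRITER_KEY, metadata, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

block = seal(b"dirent: 'home' -> inode 2")
assert verify(block)

# An attacker can rewrite the metadata, but without the key cannot
# produce a matching tag, unlike a plain CRC that anyone can recompute.
forged = b"dirent: 'home' -> inode 666" + block[-32:]
assert not verify(forged)
```

This only authenticates who wrote the metadata, of course; it does nothing about a trusted writer producing structurally invalid metadata, and it leaves open how the key would be managed.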
Posted Aug 10, 2023 1:21 UTC (Thu)
by DemiMarie (subscriber, #164188)
[Link]
Posted Jul 28, 2023 18:28 UTC (Fri)
by warrax (subscriber, #103205)
[Link] (8 responses)
A hostile file system mounted for the currently running user can still delete or exfiltrate ALL of the user's data if any sort of code execution exploit is possible. Or am I misunderstanding how FUSE works? It doesn't run as a 3rd (super-unprivileged) user, does it?
Posted Jul 28, 2023 19:01 UTC (Fri)
by mjg59 (subscriber, #23239)
[Link]
Posted Jul 28, 2023 19:28 UTC (Fri)
by pizza (subscriber, #46)
[Link] (4 responses)
A point I've made many times now is that the overwhelmingly common threat vector is "malicious files on a well-formed filesystem" -- You don't need elevated privileges to exfiltrate/encrypt/delete files owned by the same user that double-clicked on the wrong file. Combine that with the increasingly-common passwordless sudo norm, and there's no need for "exploits" to take over the whole system.
Similarly, the overwhelmingly common usage scenario is "of course I want to access the files on this device; it's why I plugged it in!" Anything that requires the user to click "yes" or otherwise trigger a mount will get muscle-memoried into uselessness pretty rapidly.
Posted Jul 28, 2023 19:52 UTC (Fri)
by KJ7RRV (subscriber, #153595)
[Link] (3 responses)
Are there distros allowing passwordless sudo by default (‽), or is it just increasingly common for users to enable it?
Posted Aug 1, 2023 15:18 UTC (Tue)
by MarcB (guest, #101804)
[Link] (2 responses)
/dev/kmsg, i.e. dmesg, is the most kernel-related example - and also a pretty bad one, because there is absolutely no way to fix this with permissions (you would have to hope the data gets to /var/log/kern.log, or similar - but the system might have severe issues, which is why you are using dmesg in the first place).
This pattern is also very common for many modern applications. Unlike older ones, they make no use of group permissions.
I can see how people are tempted to relax sudo rules more and more, even though this is a very bad idea (the sudoers man page literally contains a "Quick guide to EBNF").
Posted Aug 30, 2023 9:51 UTC (Wed)
by rbtree (guest, #129790)
[Link]
When you try to elevate, instead of a password prompt you see a request to touch the hardware token (plus an optional PIN verification: unlike a good user's password, the PIN is typically short, and entering it incorrectly 6-10 times in a row resets the token to the factory state).
Note that you should really put the config somewhere in /etc and mark the file as owned by root. The default is to put it in ~/.config — then you can just add another key there and elevate.
https://wiki.archlinux.org/title/Universal_2nd_Factor
Posted Sep 6, 2023 11:26 UTC (Wed)
by daenzer (subscriber, #7050)
[Link]
journalctl --dmesg works without sudo even if dmesg itself doesn't.
Posted Jul 29, 2023 21:18 UTC (Sat)
by geofft (subscriber, #59789)
[Link] (1 responses)
And even in the two languages the kernel supports, it's much easier to use existing third-party libraries that can handle parsing robustly in userspace than in kernelspace (e.g. serde for Rust, there's a demo at https://github.com/Rust-for-Linux/linux/pull/1007 but I am guessing it's not realistic to see it in-tree any time soon), so you can avoid hand-rolled parsers that are more likely to be buggy. The overall development and testing experience is easier, too, which should help.
But also as mjg59 said in another comment, yes you can sandbox the FUSE process. Even if you're on a distro where LSMs aren't an easy choice, you should be able to get a lot of sandboxing out of seccomp and/or unprivileged user namespaces. Or you certainly _can_ run it as a third user if you'd like (you need the user_allow_other option in /etc/fuse.conf, which is mostly there to protect against hostile filesystems hanging and giving you a bad time, but it's a much lower threat than actually running code).
Posted Jul 30, 2023 14:35 UTC (Sun)
by Paf (subscriber, #91811)
[Link]
A good kernel compromise - it’s all in one address space and can read user memory. That’s it, game over.
Languages etc do matter as well but the bigger difference is the scope of a compromise.
Posted Jul 30, 2023 19:17 UTC (Sun)
by DemiMarie (subscriber, #164188)
[Link] (2 responses)
Posted Jul 30, 2023 19:40 UTC (Sun)
by Wol (subscriber, #4433)
[Link] (1 responses)
So it can simply overwrite kernel memory, and there's nothing the kernel can do about it ...
(So you're relying on getting the MMU to contain this potentially hostile peripheral.)
Cheers,
Posted Jul 31, 2023 3:59 UTC (Mon)
by willy (subscriber, #9762)
[Link]
Posted Jul 31, 2023 10:49 UTC (Mon)
by Fowl (subscriber, #65667)
[Link]
Posted Jul 31, 2023 14:11 UTC (Mon)
by dezgeg (subscriber, #92243)
[Link] (5 responses)
Getting some company to provide funding and/or manpower for that to be upstreamed would be a great step forward.
Posted Jul 31, 2023 15:18 UTC (Mon)
by DemiMarie (subscriber, #164188)
[Link] (2 responses)
Posted Aug 1, 2023 3:20 UTC (Tue)
by quotemstr (subscriber, #45331)
[Link] (1 responses)
Or just buggy? Why would WASM be inherently less vulnerable than JITing JavaScript engines, which do have escape bugs once in a while?
Posted Aug 8, 2023 22:06 UTC (Tue)
by njs (subscriber, #40338)
[Link]
Posted Aug 3, 2023 8:51 UTC (Thu)
by rwmj (subscriber, #5474)
[Link] (1 responses)
Posted Aug 4, 2023 10:26 UTC (Fri)
by dezgeg (subscriber, #92243)
[Link]
Posted Aug 3, 2023 8:54 UTC (Thu)
by rwmj (subscriber, #5474)
[Link]
Posted Aug 8, 2023 13:52 UTC (Tue)
by anarcat (subscriber, #66354)
[Link] (6 responses)
Why is a filesystem so fundamentally different? I hear there might be a performance impact on sanity checks, but the same could have been said of the network, and netdev doesn't look kindly on performance hits. They seem to be able to juggle that balance better than filesystems...
Right now, we're in a similar situation with the firewall: we have a bunch of CVEs coming out from nftables where a root user (in a namespace of course) can use vulnerabilities to escalate privileges outside the namespace. We treat those as serious bugs (AKA security issues outside of the kernel) and fix them.
Maybe it's time to treat filesystems the same way?
Am I wrong to assume the issues around hostile filesystems could be abused to escape user sandboxes the same way recent vulnerabilities around nftables did?
Posted Aug 8, 2023 14:58 UTC (Tue)
by Wol (subscriber, #4433)
[Link] (3 responses)
While with the network the system has NO control over what is received, and cannot trust ANYTHING.
I'm far more concerned about the effort filesystem devs seem to put in to protecting the filesystem, compared to the lack of effort they put into protecting the contents.
Cheers,
Posted Aug 8, 2023 15:07 UTC (Tue)
by anarcat (subscriber, #66354)
[Link]
I understand if we'd make this argument about RAM or CPU, you need to draw that line somewhere. I think the point I'm trying to make is the line isn't drawn in the right place for filesystem images. Those used to be tied to (hard) disks, but those days are long gone...
Posted Aug 8, 2023 19:10 UTC (Tue)
by pizza (subscriber, #46)
[Link] (1 responses)
If someone injects something into the network stream, with valid checksums and metadata, then the user/application *will* get attacker-supplied data that can trigger all sorts of secondary problems. The relative ease of which attackers can inject this stuff into the network means that applications have to add more layers of security (eg TLS or application-specific stuff) to be resilient in the face of intentional attacks.
This is the sort of filesystem attack we're talking about here -- intentionally mangled metadata but with the checksums fixed up so that from the FS's perspective, it's legitimate. How is the filesystem supposed to know that the data/metadata it's reading is sane when the only mechanisms it has for such things have been intentionally subverted? Then consider that this mangled metadata might only be illegitimate in combination with another portion of the filesystem that might never get examined.
Short of effectively running a full fsck scrub at mount time (and failing to mount until all errors are fixed offline) I don't see a way to ensure the overall filesystem is in a 100% internally consistent/sane state at mount time. This won't protect you against the on-disk metadata getting changed out from underneath you either, so you'd end up needing to maintain all metadata in memory, and only doing writes on updates.
Posted Aug 8, 2023 19:49 UTC (Tue)
by farnz (subscriber, #17727)
[Link]
Or you need the effect of corrupt filesystems to be limited to that instance of the filesystem, which is a more manageable, but still hard, problem.
If the worst case impact of mounting a malformed filesystem image is that you can read parts of the image that you should not have been able to read as a normal user (e.g. there's a file whose content is the superblock, or a file that lets you read part of the partition that's not in use by the filesystem's metadata), then there's no security issue; the attacker who can damage the filesystem in ways that expose other parts of the device also has access to the full device without damaging the filesystem.
The problem here, however, is that a damaged filesystem might tickle a bug that gives access to anything the kernel has access to, not just to the partition, loopback image, or raw device that the filesystem is stored on. It's not great if a filesystem bug can be used to change the IP stack's state such that an attacker controlled remote device can get root on your system.
Posted Aug 10, 2023 16:00 UTC (Thu)
by DemiMarie (subscriber, #164188)
[Link] (1 responses)
Posted Aug 10, 2023 16:20 UTC (Thu)
by anarcat (subscriber, #66354)
[Link]
I'm not talking about Ted Ts'o or any specific kernel maintainer to actively start doing this, I know everyone is busy. What I would like is some openness to consider those things a real problem, and I think there's a blockage there. Ts'o, in particular, has been pretty vocal about this being unfixable and not part of the threat model, which I find frustrating.
Hell, if anything Google and Samsung would need this to keep hackers from doing jailbreaks on their phones eventually, won't they? :p
1. The hardware path may be lying. [A USB cable may have a CPU which injects commands into the pathway of the drive and computer.]
2. The initial probing of the drive requires some level of trust of the device that says it actually is what it is.
3. Checksums can be and are tampered with (supposedly syzkaller does this a lot).
4. Encrypting things might help, but not against 'Joe told me to plug his USB key in to get the files.' You might trust Joe, you might not trust his 16 year old son who goes by L33t0ne online these days... who totally didn't play with Joe's computer equipment last night.
5. Filesystems are hard enough with all the ways they can get corrupted and you have to somehow accept and 'deal' with. You can make things 'secure' by just saying 'nope your file is broken and I can't really trust the rest of the disk anymore so broken, buy another'. Users hate that and so you start saying 'ok I can try and get around this problem and maybe correct things...' which is great for users and great for exploits.
FFI makes it possible to rewrite the critical part of the application only. No need to change the world in one step. Not even need to change the world at all.
The word can be understood by people who don't know anything about electric stuff.
It's commonly been known as that brick you put in between things so that the two things can work with each other. That's very similar to what FFI does. Which makes it a good metaphor, IMO.
Do all block-device-based filesystems need to be rewritten?
To avoid confusion, this comment was a question, not a statement.
It’s actually worse than just removable media. Under kernel_lockdown(7), the threats to the kernel include almost everything. They include USB devices. They include removable filesystems. They include mkfs and fsck, and therefore include local filesystems! They even include PCI devices attached via Thunderbolt, and therefore most device drivers. I would not be surprised if the overall attack surface exceeds even that of web browsers, and browsers were at least written with security as a goal.
Wol
Perhaps they could be moved/sandboxed using the "User mode blobs" infrastructure?
That would be absolutely awesome, especially if it could be compiled to WebAssembly for fine-grained sandboxing. I’m not aware of a single exploit against a WebAssembly implementation that does not rely on a miscompilation, and miscompilations require the WebAssembly module itself to be malicious, as opposed to being exploited during runtime.
Linux Kernel Library
Wol
> The difference is that (in most cases) the system is in control of what gets written to disk

You can drive a bus through that "most cases". Security is not about "most cases", it's exactly about those corner cases that allow attackers to do whatever they want with a system. When the kernel mounts a filesystem, it just doesn't know what's written on there, and does *not* have control over it, by definition. It's a read operation. It has control over writes, and even then, there's a potentially hostile stack of microcontrollers underneath there.
It is time to treat filesystems the same way, but it is unreasonable to expect the (already overworked) filesystem maintainers to fix the stream of vulnerabilities syzbot (and other fuzzers) are finding. Google, Oracle, Red Hat, and other companies need to hire people specifically to fix these vulnerabilities and backport the fixes.