A story of three kernel vulnerabilities
"Software developers vary greatly in their ability to respond and patch zero-day vulnerabilities. In this study, the Linux platform had the worst response time, with almost three years on average from initial vulnerability to patch." Whether or not one is happy with how security updates work with Linux, three years sounds like a rather longer response time than most of us normally expect. Your editor decided to examine the situation by focusing on two vulnerabilities that are said to be included in the Trustwave report and one that is not.
Three years?
As of this writing, Trustwave's full report is not available, so a detailed look at its claims is not possible. But, according to this ZDNet article, the average response time was calculated from these two "zero-day" vulnerabilities:
- CVE-2009-4307: a divide-by-zero crash in the ext4 filesystem code. Causing this oops requires convincing the user to mount a specially-crafted ext4 filesystem image.
- CVE-2009-4020: a buffer overflow in the HFS+ filesystem exploitable, once again, by convincing a user to mount a specially-crafted filesystem image on the target system.
The ext4 problem was reported on October 1, 2009 by R.N. Sastry, who had been doing some filesystem fuzz testing. The report included the filesystem image that triggered the bug — that is the "exploit code" that allows Trustwave to call this bug a zero-day vulnerability. Since the problem was limited to a kernel oops, and since it required the victim's cooperation (in the form of mounting the attacker's filesystem) to trigger, the ext4 developers did not feel the need to drop everything and fix it immediately; Ted Ts'o committed a fix toward the end of November. SUSE was the first distributor to issue an update containing the fix; that happened on January 17, 2010. Red Hat did not put out an update until the end of March — nearly six months after the problem was disclosed — and Mandriva waited until February of 2011.
One might argue that things happened slowly, even for an extremely low-priority bug, but where does "three years" come from? It turns out that the fix did not work properly on the x86 architecture; Xi Wang reported the problem's continued existence on December 26, 2011, and sent a proper fix on January 9, 2012. A new CVE number (CVE-2012-2100) was assigned for the problem and the fix was promptly committed into the mainline. Distributors were a bit slow to catch up, though; Debian issued an update in March, Ubuntu in May, and Red Hat waited until mid-November — nearly eleven months after disclosure — to ship the fix to its users. The elapsed time from the initial disclosure until Red Hat's shipping an update that fixes the problem properly is, indeed, just over three years.
The story for the HFS/HFS+ vulnerability is similar. An initial patch fixing a buffer overflow in the HFS filesystem was posted by Amerigo Wang at the beginning of December, 2009. The fix was committed by Linus on December 15, and distributor updates began with Red Hat's on January 19, 2010. Some distributors were rather slower, but it was another hard-to-exploit bug that was deemed to have a low priority.
The problem is that the kernel supports another (newer) filesystem called HFS+. It is a separate filesystem implementation, but it contains a fair amount of code that was cut-and-pasted from the original HFS implementation, much like ext4 started with a copy of the ext3 code. The danger of this type of code duplication is well known: developers will fix a bug in one copy but not realize that the same issue may be present in the other copy as well. Naturally enough, that was the case here; the HFS+ filesystem had the same buffer overflow vulnerability, but nobody thought to do anything about it until Timo Warns quietly told a few kernel developers about it at the end of April 2012. Greg Kroah-Hartman committed a fix on May 4, and the problem was publicly disclosed a few days after that. Once again, a new CVE number (CVE-2012-2319) was assigned, and, once again, distributors dawdled with the fixes; openSUSE sent an update in June, while Red Hat waited until October, five months after the problem became known. The time period from the initial disclosure of the HFS vulnerability until Red Hat's update for the HFS+ problem was just short of three years.
One could look at this situation two ways. On one hand, Trustwave has clearly chosen its vulnerabilities carefully, then applied an interpretation that yielded the longest delay possible. Neither story above describes a zero-day vulnerability knowingly left open for three years; for most of that time, it was assumed that the problems had been fixed. That is doubly true for the HFS+ filesystem, for which the vulnerability was not even disclosed until May, 2012. Given the nature of the vulnerabilities, it is highly unlikely that the black hats were jealously guarding them in the meantime; the odds are good that no system has ever been compromised by exploiting either one of them. Trustwave's claims, if they are indeed built on these two vulnerabilities, are dubious and exaggerated at best.
On the other hand, even low-priority vulnerabilities requiring the victim's cooperation should be fixed — and fixed properly — in a timely manner, and it is not at all clear that happened with these problems. The response to the ext4 problem was arguably fast enough given the nature of the problem, but the fact that the problem persisted on the obscure x86 architecture suggests that the testing applied to that fix was, at best, incomplete. In the HFS/HFS+ case, one could argue that somebody should have thought to check for copies of the bug elsewhere. The fact that the HFS and HFS+ filesystems are nearly unused and nearly unmaintained did not help in this case, but attackers do not restrict themselves to well-maintained code. And, for both bugs, distributors took their time to get the fixes out to their users. We can do better than that.
Meanwhile, in 2013
Perhaps the slowness observed above is the natural response to vulnerabilities that nobody is actually all that worried about. Had they been something more serious, it could be argued, the response would have been better. As it happens, there is an open issue at the time of this writing that can be examined to see how well we do respond; the answer is a bit discouraging.
On January 20, a discussion on the private kernel security list went public with this patch posting by Oleg Nesterov. It seems that the Linux implementation of the ptrace() system call contains a race condition: a traced process's registers can be changed in a way that causes the kernel to restore that process's stack contents to an arbitrary location. The end result is the ability to run arbitrary code in kernel mode. It is a local attack, in that the attacker needs to be able to run an exploit program on the target system. But, given the ability to run such a program, the attacker can obtain full root privileges. That is the kind of vulnerability that needs quick attention; it puts every system out there at the mercy of any untrusted users that may have accounts there — or at the mercy of any attacker that may be able to compromise a network service to run an arbitrary program.
On February 15, the vulnerability was disclosed as such, complete with handy exploit code for those who do not wish to write their own. Most victims are unlikely to apply the kernel patch included with the exploit that makes the race condition easier to hit; the exploit also needs the ability to run a process with real-time priority to win the race more reliably. But, even without the patch or real-time scheduling, a sufficiently patient attacker should be able to time things right eventually. Solar Designer reacted to the disclosure this way:
Arguably this should not be a zero-day vulnerability: the public discussion of the fix is nearly one month old, and the private discussion had been going on for some time before. But, as of this writing, no distributors have issued updates for this problem. That leads to some obvious questions; quoting Solar Designer again:
One assumes that such a statement will be forthcoming in the near future. In the meantime, users and system administrators worldwide need to be worried about whether their systems are vulnerable and who might be exploiting the problem.
Once again, we can do better than that. This bug was known to be a serious vulnerability from the outset; one of the developers who reported it (Salman Qazi, of Google) also provided the exploit code to show how severe the situation was. Distributors knew about the problem and had time to respond to it — but that response did not happen in a timely manner. The ptrace() problem will certainly be straightened out in less than three years, but that still may not be a reason for pride. Users should not be left wondering what the situation is (at least) one month after distributors know about a serious vulnerability.
| Index entries for this article | |
|---|---|
| Kernel | Security/Vulnerabilities |
| Security | Bug reporting |
| Security | Linux kernel |
Posted Feb 19, 2013 18:19 UTC (Tue) by joey (guest, #328)
For example, at LCA2013, the swag bag contained a small USB key with a penguin logo. I'm among the probable majority of attendees who plugged that key into a laptop without disabling the default automounting. That could easily have been a mass exploit vector to access development machines for many Linux and free software developers, and perhaps an LWN editor too. ;)
AFAIK it was not, nor was the PDF file on the drive that some attendees also opened. But all that is needed to do such a mass exploit is an inexpensive hardware order, a bit of social engineering... and a "low priority" kernel security hole.
Posted Feb 19, 2013 19:32 UTC (Tue) by ms-tg (subscriber, #89231)
Posted Feb 20, 2013 7:12 UTC (Wed) by smurf (subscriber, #17840)
Posted Feb 20, 2013 7:17 UTC (Wed) by error27 (subscriber, #8346)
But otherwise yes, the fuzzer uses loopback filesystems for testing. The thing about USB sticks is that most distros automount them when you plug them in.

They probably should not automount less-used filesystems.
Posted Feb 20, 2013 9:54 UTC (Wed) by josh (subscriber, #17465)
Posted Feb 20, 2013 12:48 UTC (Wed) by robert_s (subscriber, #42402)
In an ideal world, of course, it would be possible to run all filesystem drivers as FUSE modules rather than in the kernel.
Posted Feb 20, 2013 13:39 UTC (Wed) by robert_s (subscriber, #42402)
Replying to myself - upon reading to the bottom of these comments it seems libguestfs can do this to some extent.
Perhaps a security-conscious distribution should consider doing auto-mounting of any "removable" block devices through such a mechanism.
Posted Feb 20, 2013 14:31 UTC (Wed) by drag (guest, #31333)
FUSE still goes through the kernel filesystem interface, and then you have all the filesystem code, the setuid fuse binaries, and the special permissions the user has to have to access /dev/fuse.

It seems to me to be an attempt to throw code and complexity at the problem to obfuscate a potential security hole. It seems a better approach to just fix the code.
Also, I am pretty sure that if somebody plugs a device into a machine they have the full intention of mounting it to see what is on it. Having an 'ack' button may be useful in a case where you do not want a device mounted while you are away from the computer and the screen is locked, but besides that, having an extra step the user must go through to mount it would serve little purpose. It may make people feel more comfortable, or help people (like me) who tend to do odd things with flash filesystems that preclude mounting them.

This is a case where potentially some sort of 'anti-virus' code may be useful to validate the device before mounting it, but that seems to open up a whole new can of worms.
Posted Feb 20, 2013 14:48 UTC (Wed) by robert_s (subscriber, #42402)
Well you'd better tell the authors of libguestfs then (largely RedHat) as security seems to be its main intention.
If you're saying that an exploit granting access to a user space program is just as dangerous as it having access to kernel space, I think most people would disagree with you.
The point is not whether or not the user wants to mount the device - let's take it for granted that they do, so confirmation is irrelevant. It's whether that USB stick that was just handed to them at a conference is able to directly exploit their kernel on insertion through a specially crafted filesystem.
"Just fix"ing "the code" in this case means "always getting all filesystem code 100% right 100% of the time".
Posted Feb 20, 2013 16:01 UTC (Wed) by drag (guest, #31333)
No.
I am saying that taking a security problem that exists in kernel space and then trying to fix it by moving to a mixture of kernel space and userspace and throwing in a couple setuid root binaries isn't a silver bullet.
Fuse requires kernel file system features as well as setuid root binaries to operate properly. Without granting users access to /dev/fuse you can't 'mount' fuse file systems. Just granting users the ability to use fuse is a security risk in itself.
Now if you were to say that you wanted to use something like GVFS, which itself doesn't require any special privileges or fuse mounts or anything like that, then that's different. That is completely in a user account, but it's not POSIX compatible and requires programs to be GVFS aware.
Posted Feb 20, 2013 16:07 UTC (Wed) by drag (guest, #31333)
Posted Feb 21, 2013 19:40 UTC (Thu) by alonz (subscriber, #815)
So I, for one, really don't get your point.
Posted Feb 20, 2013 20:33 UTC (Wed) by josh (subscriber, #17465)
Posted Feb 28, 2013 21:20 UTC (Thu) by Wol (subscriber, #4433)
Assuming the automount works even if the screen is locked (as I get the impression is often the case), this is a perfect way of breaking into someone else's machine. If the exploit opens a root shell on a secret port, that machine is now owned ...
So in that case, the user knows exactly what is on it. They want to see what's on the machine.
So a confirmatory pop-up (as I get on my gentoo system) *is* a very effective security step.
Cheers,
Posted Feb 19, 2013 19:45 UTC (Tue) by spender (guest, #23067)
The PTRACE_SETREGS race vulnerability in various incarnations goes back to at least 2.4, so at least 12 years of vulnerability (both on x86 and x86_64 BTW). FWIW, given the characteristics of the vulnerability, the constraints on it, and the extensive cleanup required to not bring the system down with it, it's unlikely to be exploitable on a grsecurity system with KERNEXEC/UDEREF enabled. If it were possible, a large infoleak of kernel .text would be needed (which we've hopefully eradicated via USERCOPY) and an additional infoleak or reliable address with which to store a ROP payload.
BTW I released the ARM blog I had mentioned earlier, for those who are interested:
-Brad
Posted Feb 19, 2013 21:47 UTC (Tue) by Trou.fr (subscriber, #26289)
However, the handling of the ptrace vuln is very representative of the state of security in the Linux world.
Nobody cares about real security. The only progress that has been made in actual security in a _mainline_ distro was in Ubuntu with the work of Kees Cook. Distros don't care about security, and Linus doesn't care either, so we're stuck with a platform that has made very little progress in 10 years.

The support for signed kernel modules is quite representative too: it's been implemented because of UEFI, 10 years too late (in Linus' words).
Seeing the awesome work in grsecurity and PaX being ignored is depressing. The discussion about the inclusion of grsecurity in Debian is quite revealing: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=605090 :( It leads to fragmentation: people with security needs manage and maintain their own grsecurity kernels and don't even try to push them upstream because of the refusals they will get...

Microsoft, which was despised for its horrible security 10 years ago, has made such progress that Linux is considerably behind now. I just hope we'll be able to catch up.
Posted Feb 19, 2013 22:05 UTC (Tue) by rahulsundaram (subscriber, #21946)
Posted Feb 19, 2013 22:59 UTC (Tue) by spender (guest, #23067)
-Brad
Posted Feb 19, 2013 23:37 UTC (Tue) by mjg59 (subscriber, #23239)
Signed module support in RHEL was never about security, it was about supportability. If customers are willing to use MSR hacks to load unsigned modules they're also going to be willing to just modify their bug reports to remove the tainted flags, so making it foolproof was never a great concern.
Posted Feb 20, 2013 10:30 UTC (Wed) by renox (guest, #23785)
"could have been"? What about the HFS+ exploit?
Posted Feb 20, 2013 13:52 UTC (Wed) by Trou.fr (subscriber, #26289)
The HFS+ vuln is not exploitable in that case. While it can be used for "physical" attacks like the USB key, it is not usable remotely.
_Thousands_ of servers have been compromised with that scenario :
Posted Feb 20, 2013 16:24 UTC (Wed) by bfields (subscriber, #19510)
If people don't exchange data on usb keys as much as they used to on floppies, perhaps that wouldn't be as effective these days.
Posted Feb 20, 2013 23:59 UTC (Wed) by andrel (guest, #5166)
Posted Feb 21, 2013 11:55 UTC (Thu) by Trou.fr (subscriber, #26289)
As for floppies, viruses spread mostly by running infected executables, not using vulns.
Posted Feb 20, 2013 20:44 UTC (Wed) by corsac (subscriber, #49696)
Posted Feb 20, 2013 21:23 UTC (Wed) by raven667 (subscriber, #5198)
Posted Feb 21, 2013 3:16 UTC (Thu) by draco (subscriber, #1792)
If the kernel must parse ASN.1/X.509 to authenticate the signature... yikes. But that's not a requirement. (And even if it does, I hope it's a really limited implementation.)
Posted Feb 21, 2013 6:16 UTC (Thu) by corsac (subscriber, #49696)
Posted Feb 21, 2013 19:13 UTC (Thu) by zlynx (guest, #2285)
Posted Mar 1, 2013 17:56 UTC (Fri) by lopgok (guest, #43164)
It is in the mainline kernel.
Posted Feb 20, 2013 9:17 UTC (Wed) by epa (subscriber, #39769)
Posted Feb 22, 2013 5:41 UTC (Fri) by jtc (guest, #6246)
Not only is their analysis biased, but, if the ZDNet summary of their report is to be believed, they've shown themselves to be incompetent:
"Zero-day flaws — software vulnerabilities for which no patch is available — in the Linux kernel that were patched last year took an average of 857 days to be closed, Trustwave found. In comparison zero-day flaws in current Windows OSes patched last year were fixed in 375 days."
The obvious implication is a claim that the average time-to-close for all zero-day defects in the Linux kernel is 3 years (versus 375 days for Windows). Obviously, an average cannot be calculated from 2 instances, which are very likely worst-case, out of many critical defects. Such a miscalculation, of course, implies incompetence (or the ZDNet summary is inaccurate). The criticism that these 2 cases took too long to fix is, perhaps, warranted, but nobody paying attention will conclude from their report that the implication of the headline ("Linux trailed Windows in patching zero-days in 2012...") is anything other than bullshit.
Interestingly, at the end of the ZDNet article is this:
"The Trustwave report says the number of critical vulnerabilities, as determined by the Common Vulnerability Scoring System (CVSS) assessment of factors like potential impact and exploitability, identified in the Linux kernel was lower than in Windows last year, with nine in Linux compared to 34 in Windows. The overall seriousness of vulnerabilities was also lower in Linux than Windows, with Linux having an average CVSS score of 7.68 for its vulnerabilities, compared to 8.41 for Microsoft."
This might be viewed as evidence that Trustwave is not biased, but, unfortunately for them in light of their main (apparent) claim, not as evidence that they are not incompetent.
Posted Feb 19, 2013 19:48 UTC (Tue) by dlang (guest, #313)
They are supposed to be more reliable because they have better testing, but that testing takes time, and no distro ships the latest upstream kernel, so every distro has the added delay that they need to
1. notice that a change needs to be backported to their private kernel (I'm sure the usual suspects will again blast the kernel developers for not labeling every patch with its security implications so that people could look only at 'security' patches, but that's a very old debate)
2. backport the change (figuring out if the patch has other implications due to other, unrelated changes that have taken place in the meantime)
3. test the 'new' kernel
4. ship the 'new' kernel to users.
All of this takes a long time, a few months of delay is actually surprisingly good (although 11-13 months seems to be a bit on the long side)
Posted Feb 19, 2013 20:28 UTC (Tue) by hibiscus (guest, #86633)
The race is hard to win in this case. And as you can see, the PoC requires a kernel patch to work reliably.
Posted Feb 19, 2013 21:00 UTC (Tue) by drag (guest, #31333)
How many times can a script kiddie try the exploit in a minute? In an hour? In a day? I don't know the details of this exploit, but I expect the answers to those questions range from the thousands to the tens of thousands of attempts.

How many times does it have to work? The answer, of course, is 'once'. So if the exploit is as little as 0.0001% reliable, I bet it can lead to a rooted computer 100% of the time given the right circumstances.
Posted Feb 19, 2013 21:30 UTC (Tue) by hibiscus (guest, #86633)
Posted Feb 20, 2013 4:09 UTC (Wed) by rahvin (guest, #16953)
Posted Feb 21, 2013 15:03 UTC (Thu) by alankila (guest, #47141)
In any case, this sort of probability requires a means to fire the attack several times per second, or it will probably take years of continuous attempts before succeeding. Unfortunately, ptrace sounds like the sort of thing you can try thousands of times per second.
Posted Feb 21, 2013 16:20 UTC (Thu) by drag (guest, #31333)
Posted Feb 20, 2013 10:42 UTC (Wed) by rwmj (subscriber, #5474)
For example OpenStack out of the box will mount untrusted guest filesystems on the host kernel, so all you need to do is upload a malicious filesystem image to a public cloud in order to attack the host and any other VMs running on the same system.
We (Red Hat) have worked to mitigate this by using libguestfs which adds several layers of protection between a malicious filesystem and the host:
http://libguestfs.org/guestfs.3.html#security-of-mounting...
But with sysadmins still using kpartx / loopback mounting, there's still a need to take fs vulnerabilities much more seriously.
Posted Feb 21, 2013 1:06 UTC (Thu) by dgc (subscriber, #6611)
Vendors take them extremely seriously, but there's lots more to the process than "OMG!!! Security Problem! World ends at 5pm if we don't have a fix by then!". As a filesystem developer (who, coincidentally, works for Red Hat too) I have fixed my fair share of fsfuzz-related bug reports over the years.

So, what's the real issue here? It's that most fuzzer "filesystem vulnerabilities" are either a simple denial of service (a non-exploitable kernel crash), or are only exploitable when you *already have root* or someone does something stupid from a security perspective. However, once a problem is reported to the security process it is captured, and the security process takes over everything regardless of whether subsequent domain-expert analysis shows that the bug is a security problem or not.
> For example OpenStack out of the box will mount untrusted guest
This is a prime example of "doing something stupid from a security perspective". Virtualisation is irrelevant here - the openstack application is doing the equivalent of picking up a USB stick in the car park and plugging it into a machine on a secured network.....
However, to really understand the situation from an fs developer's POV you need to understand a bit of history and a bit about risk. That is, any change to filesystem format validation routines carries a risk of causing corruption or false detection of corruption, and hence you can seriously screw over the entire filesystem's user base with a bad fix.
Think about it for a moment - a divide-by-zero crash on a specifically corrupted filesystem is simply not something that occurs in production environments. However, the changes to the code that detects and avoids the problem are executed repeatedly by every single ext4 installation in existence. IOWs, the number of people that may be affected by the corrupted-filesystem div0 problem is *exceedingly tiny*, while the number of people that could be affected by a bad fix is, well, the entire world.
Then consider that the CVE process puts pressure on the developers to fix the problem *right now* regardless of any other factors. Hence the fixes tend to be rushed, not well thought out, only lightly tested, and not particularly well reviewed. In the filesystems game, that means the risk of regressions, or of the fix not working entirely as intended, is significant.
In the past this risk was ignored for security fixes, and that's why we have a long history of needing to add more fixes to previous security fixes. We have proven that the risk of regressions from rushed fixes is real and it cannot be ignored. Hence -in this arena- the CVE process could be considered more harmful to users than leaving the problem unfixed while we take the usual slower, more considered release process. i.e. the CVE process (and measuring vendor performance with CVE open/close metrics) simply does not take into account that fixing bugs badly can be far worse for users than taking extra time to fix the bug properly.
Vendors that do due diligence (i.e. risk assessment of such bugs outside of the security process) are more likely to correctly classify fuzz-based filesystem bugs compared to the security process. Hence we see vendors mitigating the risk of regressions by testing the filesystem fixes fully before releasing them rather than rushing out a fix just to close a CVE quickly.
IOWs, -more often than not- vendors are doing exactly the right thing by their user base with respect to filesystem vulnerabilities. The vendors should be congratulated for improving on a process that had been proven to give sub-standard results, not flamed for it...
-Dave.
Posted Feb 21, 2013 1:55 UTC (Thu) by PaXTeam (guest, #24616)
Posted Feb 22, 2013 10:36 UTC (Fri) by ortalo (guest, #4654)
Just my 2/5500 cents...
[1] BTW, I have a graph of that data at http://rodolphe.ortalo.free.fr/COURS_SE_2012_r3.pdf, page 15, but everyone can grab it from cve.mitre.org
Posted Feb 22, 2013 23:13 UTC (Fri) by jmorris42 (guest, #2203)
That file still exists of course, and the mount command will still honor it when issued from a command line; but it is ignored by graphical desktops. And this defect is undocumented and if filed as a bug would be instantly closed as NOTABUG.
For example the machine I'm typing on dual boots Win7 and has an NTFS filesystem for it. Despite efforts to suppress it, it shows an icon on my desktop and if I right click it the desktop environment happily offers to mount it and it will succeed. Meanwhile /etc/filesystems is still the stock one supplied by Fedora. It lists vfat, hfs and hfsplus (why) but does not mention ntfs.
In a sane world, a Linux desktop would not automatically mount rare filesystems; better still, it would honor /etc/filesystems so the user could control it. Just how many users need hfs support? On a removable device? Close enough to zero that it should default to no. These days ext[234], vfat, ntfs, iso9660 and udf probably should default to supported, with everything else off.
Posted Feb 23, 2013 12:39 UTC (Sat) by cortana (subscriber, #24596)
As for /etc/filesystems and /proc/filesystems, these days mount itself only seems to consult them if '-t auto' is used (or '-t' is absent entirely) and if libblkid fails to identify the correct filesystem. So I get the feeling that /etc/filesystems is really a remnant of an obsolete feature that hasn't been used since kernel module autoloading went in.
Posted Mar 2, 2013 16:59 UTC (Sat) by jmorris42 (guest, #2203)
But the key point remains, after several replies nobody can point to a way to actually solve a problem that exists on all graphical desktops.
udev is clearly not intended to be modified by the end user. It isn't documented; the files controlling it are written in a way that is hostile to manual editing, and the entire subsystem has been churning for years.
Simply stopping the modules from loading isn't a good solution either.
You can't even reliably suppress the icons from appearing on a desktop. I once found a way to do it, it worked until the next Fedora.
Posted Mar 3, 2013 15:42 UTC (Sun) by cortana (subscriber, #24596)
Posted Mar 4, 2013 15:27 UTC (Mon) by nix (subscriber, #2304)
Posted Feb 24, 2013 8:49 UTC (Sun) by paulj (subscriber, #341)
Arg!
Posted Mar 12, 2013 3:59 UTC (Tue) by Duncan (guest, #6647)
My gentoo/kde systems are built without udisks, policykit, etc. support, the appropriate USE flags turned off, due to the heavy dependencies (udisks-1 wanted lvm2, udisks2 wants gparted while I use gptfdisk; I need those installed like I need another hole in my head!). And the kernel is built for the specific system it's on, monolithic, module support turned off. (Tho I did have to package.provide a couple of runtime deps, including kdesu, that I didn't need anyway. I could of course have edited and overlaid the ebuilds to kill the runtime deps, but that would have been a repeated edit over many updates. Package.providing them need only be done once.)
So no automounting or GUI superuser access and for SURE no support for obscure filesystems!
Where specific privilege-required functions are to be used by the GUI user, I configure sudoers to allow the specific command, no more, no less, with or without password required, depending on the need and how locked down the command actually is. Yes, that does require that the user use the command line for it, but IMO, if a user isn't comfortable using the command line, they have no business running superuser/privileged commands in the first place.
Of course that's a bit drastic for many, but that's precisely the point: gentoo, being built from source by the user, allows turning off unneeded features at end-user-controlled build time, as opposed to centralized, distro-decided "someone might use it so we better enable it" defaults, set at /their/ build time. If you want automount, turn on the appropriate USE flags; otherwise turn them off and don't even have the otherwise-required components installed in the first place. Actually, it's more than that: in effect, over time gentoo STRONGLY ENCOURAGES observance of the "only install what you actually use" security rule, because otherwise you're repeatedly building updates for stuff you don't use anyway; if you're not actually using it, it quickly becomes simpler to just turn it off and not worry about building it any more.
So yes, there's a "reasonably obvious" way to turn them off... switch to a distro (and desktop, if necessary, but I'd guess gnome on gentoo allows turning it off too, I just don't know for sure as I don't use it) that allows it, if yours doesn't. =:^)
Duncan
Posted Feb 24, 2013 15:40 UTC (Sun)
by spender (guest, #23067)
[Link]
Grsecurity will also prevent mount from being able to load arbitrary kernel modules (it will be restricted to modules that register a filesystem).
This is a subset of the full GRKERNSEC_MODHARDEN feature, which prevents unprivileged users from auto-loading kernel modules at all, without having to implement a posteriori blacklists.
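For readers on a stock kernel: the a posteriori blacklist alluded to above is what you are left with there, built from modprobe "install" overrides. A sketch (the filesystem list is an example; pick the ones you never mount):

```
# /etc/modprobe.d/fs-blacklist.conf
# Run /bin/false instead of loading the module, so a
# "mount -t hfsplus ..." (e.g. via an automounter) cannot pull it in.
install hfsplus /bin/false
install hfs /bin/false
install udf /bin/false
```

The weakness being pointed at is exactly that this list must be maintained by hand, filesystem by filesystem, whereas MODHARDEN blocks unprivileged module auto-loading wholesale.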
-Brad
Posted Feb 28, 2013 6:40 UTC (Thu)
by geek (guest, #45074)
[Link]
Dave
mount: only root can do that
The only setuid binary involved with using FUSE is "fusermount", which only opens /dev/fuse and immediately drops privilege. The filesystem handler itself runs as an unprivileged user.
they have the full intention of mounting it to see what is on it
Wol
https://forums.grsecurity.net/viewtopic.php?f=7&t=3292
As joey remarked above, it is a real issue.
By focusing on the ext4 DoS, you "forget" the other issue.
1) outdated CMS with remote code execution (mostly PHP)
2) easy execution of arbitrary executables
3) ready-to-use exploit that works reliably as an unprivileged user
1) vulnerable webapp
2) escalation to root using a kernel vulnerability (or a careless sysadmin)
3) ssh backdoor to collect passwords
4) compromise other hosts, goto 3
5) use the compromised servers as DDoS platforms, proxies, whatever...
It is enabled by default in RHEL and Fedora, and perhaps elsewhere.
I heard it is even in the newest Android builds.
Filesystem vulnerabilities
> (including Red Hat who I work for).
> filesystems on the host kernel,
I think it's FUD. Admittedly that's an uninformed comment, because I am so convinced of it that I no longer even take the time to read the reports in question...
But I'd like to point out something factual: I see two CVE ids here, both from 2009.
In 2009 alone, there were over 5500 CVE ids. The evolution of the number of CVE entries since 2000 is, in my opinion, a much more interesting topic [1].
Now my question for Trustwave: who funded that research?
Can't disable unused filesystems
Sure CAN disable unused filesystems =:^)
Isn't that, in principle, a testable assumption? I'd be interested to know whether there is a testing discipline around such assumptions; I suppose this isn't the only time one has come up.
