
Linux 2.6.30 exploit posted

Linux 2.6.30 exploit posted

Posted Jul 19, 2009 5:34 UTC (Sun) by mingo (subscriber, #31122)
In reply to: Linux 2.6.30 exploit posted by drag
Parent article: Linux 2.6.30 exploit posted

So he deserves to be thanked not only for finding the bug, but also for being honest about it. There is money to be made in 0-day exploits, and he could have profited from it financially before going public, if he ever decided to go public.

The timing suggests that he noticed the fix to the NULL dereference, not the bug. He could have found the original bug in February and gone public about it, but he (like others who reviewed that code) didn't notice the (obvious in hindsight) bug.

What he did was demonstrate that a thought-to-be-unexploitable NULL-dereference kernel crash can be exploited due to a cascade of failures in other components: the compiler [to a lesser degree] and SELinux [to a larger degree].

This was useful: .31 could have been released with the fix, but there was no -stable back-port tag for the fix to make it into .30. Also, perhaps more importantly in terms of practical impact, the two cascading failures in SELinux and GCC were also worth fixing, and fixing them could avoid (or at least reduce the probability of) future exploits.

(This still leaves open the theoretical possibility of him having known about the original networking bug (introduced in February) and having exploited it - only going public with the exploit once the NULL dereference fix went upstream. I don't think this happened.)

So the disclosure was useful but, to be fair to the original poster, also not fully friendly. It maximized his own gain, regardless of the consequences. Posting a zero-day exploit in the middle of the summer holiday season might be seen as reckless and irresponsible by someone who happened to be on vacation in that time-frame.

Regarding the suggestion of a personality disorder by the original poster, the observation sounds plausible - but indeed irrelevant, as you point out. The field of finding exploits is unforgiving: you spend days, weeks, months, and years reading the worst possible code people can throw out, with just a few instances of something really big being found.

In that time you don't actually modify the code you read in any public way, you don't interact with people, and you don't socialize with those developers. You don't even relate to the code in any personal way - you try to find certain patterns of badness. While developers have happy users (we hope ;-), exploit finders get little if any positive feedback.

This, almost by definition, distorts the personality and creates a false sense of superiority: "If only I were allowed to hack this code, I'd clearly do such a better job. And they call this Linus a genius while he allows such obvious crap. Morons!"

So yes, the somewhat childish attitude and messaging, the hatred, the self-promoting PR, the exaggeration, the sense of superiority, and the narcissism are all pretty normal in that field of activity. Compound that with some inevitable level of paranoia, and, where morals are weak, perhaps also the constant financial lure of the criminal side mixed with the fear of going too far and becoming a felon.

Plus such patterns draw external attacks (mixed with the emotional, defensive attitude of developers when one out of ten thousand commits per kernel cycle turns out to be seriously buggy - bringing out the worst in them: initially ridiculing or downplaying the exploit writer), which creates a self-reinforcing cycle of violence that deforms the psyche.

Without sounding patronizing, IMHO those are forces strong enough to bend steel, let alone the human psyche. I think that such exploit-finding work should be done in organized, perhaps government-sponsored setups, with proper safeguards and humane work conditions. It's useful to society at large, and it's a pity that it's currently done in such an unstructured, random way, burning through and bending good and smart people fast.



Linux 2.6.30 exploit posted

Posted Jul 19, 2009 8:02 UTC (Sun) by dlang (guest, #313) [Link]

Actually, I believe that the fix for this will be in 2.6.30.2, which is due to be released any time now (potentially over the weekend). The preliminary patches were released on Thursday or Friday, moments before Greg K-H left on a trip, with the expectation that if no problems were found with them, the -stable release would happen in a couple of days.

Linux 2.6.30 exploit posted

Posted Jul 19, 2009 9:08 UTC (Sun) by MisterIO (guest, #36192) [Link]

I think that if you standardize this kind of work too much, you're going to strip away most of the fun that people who hunt for bugs feel doing it. I don't see any problem if they exaggerate, or if they're narcissists or childish. It's actually funnier this way. And after all, by your reasoning, how would you classify some of Torvalds' incinerating posts? I find them funnier and more entertaining to read than their hypothetical technically equivalent but more formal replacements.

Linux 2.6.30 exploit posted

Posted Jul 19, 2009 13:54 UTC (Sun) by spender (guest, #23067) [Link] (5 responses)

If you can't laugh at yourself, others will do it for you.

A static checker could have found the bug. What makes you think a blackhat can't find bugs when they're introduced? They want to actually compromise machines that have unfixed vulnerabilities, you know. And they don't post their findings online, especially not in as nice a presentation as mine.

The fact that in my free time I was able to spend 5 minutes and figure out that the particular bugfix that was ignored for its security implications was in fact exploitable isn't really relevant here. I'm not the person you need to worry about (unless all you really worry about is embarrassing public disclosure of embarrassing vulnerabilities like the SELinux one).

BTW, as that hugely embarrassing SELinux vulnerability is currently being brushed under the carpet as "errata", I've gone ahead and submitted a CVE request for it myself. The previous do_brk() bypass of mmap_min_addr received a CVE in 2007; this case should be no different. An advisory will follow.

While I'm here, just a side-note on why I won't ever be cooperating in ways you might prefer in the future:

On June 2nd, I sent a private mail to Jakub Jelinek discussing some problems with FORTIFY_SOURCE I encountered when evaluating its usefulness for the kernel (by doing the actual porting work, marking allocators with the appropriate attributes, and implementing a few other tricks of my own). I found it to be very poor - only 40% coverage in the kernel, basically missing everything but the most trivial of cases that didn't need protection in the first place. Specifically, one of the things I mentioned was that FORTIFY_SOURCE wasn't able to determine the size of arrays within structures, and given how widely structures are used in the kernel, having proper bounds checking on their elements is pretty important (quoted from his reply):
> I have a structure in grsecurity, struct gr_arg. It looks like:
>
> +struct gr_arg {
> + struct user_acl_role_db role_db;
> + unsigned char pw[GR_PW_LEN];
> + unsigned char salt[GR_SALT_LEN];
> + unsigned char sum[GR_SHA_LEN];
> + unsigned char sp_role[GR_SPROLE_LEN];
> + struct sprole_pw *sprole_pws;
> + dev_t segv_device;
> + ino_t segv_inode;
> + uid_t segv_uid;
> + __u16 num_sprole_pws;
> + __u16 mode;
> +};
>
> I have a function, called chkpw, its declaration looks like:
> int chkpw(struct gr_arg *entry, unsigned char *salt, unsigned char *sum);
>
> within that function, I do the following:
>
> memset(entry->pw, 0, GR_PW_LEN);
>
> If I put a __builtin_object_size(entry->pw, 0/1) check above that, it's
> always -1.

Here's his reply from Jun 3rd:

"The above description is useless, you need to provide complete (though
preferrably minimal) self-contained preprocessed testcase.
I'm not going to second guess what your code looks like."

Apparently my description was so useless that the next day, on Jun 4th, what gets submitted to gcc?
http://gcc.gnu.org/ml/gcc-patches/2009-06/msg00419.html

No credit, no word of thanks. This, combined with the attempted cover-up of the SELinux vulnerability, means I'll be going back to selling vulnerabilities in Red Hat-related technologies (exec-shield, SELinux, etc.) as I did in the past. $1000 for an exec-shield vulnerability from back in 2003, I think? (I can't seem to find the picture I took of the check with "exec-shield" in the memo line ;)) It's still not fully fixed today. Maybe it was from 2004, judging by this post where I mention doing so: http://lwn.net/Articles/112880/ It was to a legitimate purchaser, who (unfortunately for you, I guess) doesn't have a policy of notifying the vendor.

PS: I don't need a lecture on ego or feeling like I can do things better than everyone else, from the very faszkalap kernel hacker who is hated by everyone for those very things.

-Brad

Linux 2.6.30 exploit posted

Posted Jul 19, 2009 17:53 UTC (Sun) by nix (subscriber, #2304) [Link]

> This combined with the attempted cover-up of the SELinux vulnerability means I'll be going back to selling vulnerabilities in any Red Hat-related technologies

I take back everything I just said about your not doing things like selling vulnerabilities, then.

(You really do only care about getting your name in lights, don't you? Actual system security obviously comes second or you wouldn't even consider selling vulns.)

Linux 2.6.30 exploit posted

Posted Jul 19, 2009 19:26 UTC (Sun) by vonbrand (subscriber, #4458) [Link] (1 responses)

Sorry, but I have to agree that a code snippet with a rather vague description is next to useless. And the commit you have issues with could very well be "independent invention" (or, for the terminally paranoid, somebody took your snippet and made it into a complete example).

So you found a collection of bugs that in total turn out to be a serious, exploitable vulnerability. Congratulations, more power to you! That some pieces (which by themselves aren't exploitable) weren't taken too seriously was to be expected, given the above. No "sweeping under the rug" here.

Please consider that there are tens of thousands of changesets flowing into the kernel each release cycle. If only a few turn out to have exploitable bugs, that is a huge success ratio. Sure, it is sadly still not enough.

Also, not everybody who finds and fixes a problem is able to (or even interested in) finding out whether the bug was a security problem, much less in developing exploit code. That very few bug fixes are labeled "security risk" is to be expected; no dark cover-up is to be suspected here.

Linux 2.6.30 exploit posted

Posted Jul 30, 2009 13:57 UTC (Thu) by lysse (guest, #3190) [Link]

> somebody took your snippet and made it into a complete example

Or someone did what I've done myself in the past - tersely pointed out "useless bug report is useless", but then thought "oh, but hang on, what if there *is* a problem there?" and gone digging around themselves until they realised what the issue was and fixed it.

There's always another option, and there's always another way it could have happened.

Linux 2.6.30 exploit posted

Posted Jul 20, 2009 9:50 UTC (Mon) by makomk (guest, #51493) [Link] (1 responses)

The SELinux mmap_min_addr bypass vulnerability... isn't one, exactly. It's documented behaviour of mmap_min_addr that if you're using SELinux, mmap_min_addr has no effect and SELinux controls the minimum address. (It's not documented in Documentation/sysctl/vm.txt, though, by the looks of it. Fail.)

Now, Red Hat should set it for robustness reasons, but if they don't it's not Linux's fault exactly.

Linux 2.6.30 exploit posted

Posted Jul 20, 2009 12:40 UTC (Mon) by spender (guest, #23067) [Link]

Where's this documented behavior you talk about? Here's the documentation for it straight from the configuration help:

config SECURITY_DEFAULT_MMAP_MIN_ADDR
        int "Low address space to protect from user allocation"
        depends on SECURITY
        default 0
        help
          This is the portion of low virtual memory which should be protected
          from userspace allocation. Keeping a user from writing to low pages
          can help reduce the impact of kernel NULL pointer bugs.

          For most ia64, ppc64 and x86 users with lots of address space
          a value of 65536 is reasonable and should cause no problems.
          On arm and other archs it should not be higher than 32768.
          Programs which use vm86 functionality would either need additional
          permissions from either the LSM or the capabilities module or have
          this protection disabled.

          This value can be changed after boot using the
          /proc/sys/vm/mmap_min_addr tunable.

Distros do bother to set /proc/sys/vm/mmap_min_addr. It mattered before, when mmap_min_addr was bypassed via do_brk(). It matters now that everyone, by default, can bypass mmap_min_addr simply by having SELinux enabled.

-Brad

Linux 2.6.30 exploit posted

Posted Jul 22, 2009 10:00 UTC (Wed) by ortalo (guest, #4654) [Link] (1 responses)

Hey... I have rarely read such an interesting comment, especially in association with a real security failure.

First, thanks for the nice wrap up.
Second, as I am involved in security-related teaching activities, would you perhaps allow me to present your text to my students for commenting?

Finally, let me express some additional concerns.
Government-funded or otherwise organized vulnerability research may actually already be occurring without leading to security improvements: think of military-funded organizations, or the simply selfish (and commercially compatible) self-protection of big players. I wonder how we could guarantee that such organizations contribute to overall security. But I totally agree with you that such organized research is still too rare; hence we still rely far too much on individual achievements in this area.
Then there is a deeper question: don't we feel the need for technical vulnerability research because we do not put enough effort into providing security guarantees (or mechanisms, or properties) in our systems? (And yes, I know I speak to an audience that already does much more than any other in this area - I would probably not even openly express this concern if I did not know that.)

Linux 2.6.30 exploit posted

Posted Aug 2, 2009 15:30 UTC (Sun) by mingo (subscriber, #31122) [Link]

> Second, as I am involved in security-related teaching activities, would you perhaps allow me to present your text to my students for commenting?

Sure, feel free!

> Finally, let me express some additional concerns. Government-funded or otherwise organized vulnerability research may actually already be occurring without leading to security improvements: think of military-funded organizations, or the simply selfish (and commercially compatible) self-protection of big players. I wonder how we could guarantee that such organizations contribute to overall security. But I totally agree with you that such organized research is still too rare; hence we still rely far too much on individual achievements in this area. Then there is a deeper question: don't we feel the need for technical vulnerability research because we do not put enough effort into providing security guarantees (or mechanisms, or properties) in our systems?

How much effort we put into various fields is largely supply-demand driven.

Firstly, the main drive in the 'fix space' is towards problems that affect people directly.

A bug that crashes people's boxes will get prime-time attention. A missing feature that keeps people from utilizing their hardware or apps optimally also gets a fair shot, and all the market forces work on it in a healthy way.

'Security issues' are not included in that 'direct space' - the ordinary user is rarely affected by security problems in a negative way.

So computer security has become a field that is largely fear-driven, with a lot of artificial fear-mongering going on: frequent innuendo, snake-oil merchants, and all the other parasitic tactics that can capture the (undeserved) attention (and resources) of people who are not affected by those issues.

I think it's difficult to see where the right balance is, given how hard it is to measure the security of a given system.


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds