
Kernel security, year to date

By Jonathan Corbet
September 9, 2008
Earlier this year, your editor asked a high-profile kernel developer, in a public discussion at a conference, about the seemingly large number of kernel-related security bugs. Was the number of these vulnerabilities of concern, and what was being done about it? The answer that came back was that security issues aren't a huge concern, that most of the reported issues were obscure local exploits requiring the presence of specific hardware. Serious issues, like the vmsplice() vulnerability, are rare.

More recently, as part of the panic associated with getting a talk together for the Linux Plumbers Conference, your editor decided to take a closer look at kernel vulnerabilities. It turns out that there are, in fact, quite a few of them. The vulnerabilities which have been given CVE numbers in 2008 (so far) are:

CVE Subsystem Vuln type Notes
CVE-2008-0001 VFS privilege File access mode bypass
CVE-2008-0007 drivers privilege Missing fault() boundary checks
CVE-2008-0009 core info disclosure vmsplice() #1
CVE-2008-0010 core info disclosure vmsplice() #2
CVE-2008-0352 net remote DOS IPv6 packet handling crash
CVE-2008-0598 x86 info disclosure 32-bit emulation exposes memory
CVE-2008-0600 net privilege The big vmsplice() hole
CVE-2008-0731 AppArmor privilege SUSE AppArmor vulnerability
CVE-2008-1294 core resource RLIMIT_CPU limit bypass
CVE-2008-1367 x86 privilege? GCC 4.3.0 and DF
CVE-2008-1375 VFS privilege dnotify race
CVE-2008-1514 s390 DOS ptrace() crash
CVE-2008-1615 x86 DOS ptrace() crash
CVE-2008-1619 Xen DOS Xen crash (Red Hat kernels)
CVE-2008-1669 VFS privilege fcntl() race
CVE-2008-1673 net remote privilege ASN.1 buffer overflow
CVE-2008-1675 net privilege Tehuti driver overflow
CVE-2008-2136 net remote DOS IPv6 SIT tunnel memory leak
CVE-2008-2137 sparc DOS mmap() panic
CVE-2008-2148 VFS DOS utimensat() missed permission check
CVE-2008-2358 net remote DOS DCCP integer overflow
CVE-2008-2365 core DOS Red Hat utrace race
CVE-2008-2372 net DOS mmap() resource use
CVE-2008-2729 x86 info disclosure x86_64 copy_*_user() error
CVE-2008-2750 net remote privilege PPPOL2TP overflow
CVE-2008-2812 TTY drivers privilege NULL pointer dereference
CVE-2008-2826 net DOS SCTP memory use
CVE-2008-2931 VFS privilege unprivileged mount point changes
CVE-2008-2944 core DOS Red Hat utrace double free
CVE-2008-3077 x86 privilege x86_64 ptrace() crash and use-after-free
CVE-2008-3247 x86 privilege x86_64 LDT setup error
CVE-2008-3272 sound info disclosure OSS unverified device number
CVE-2008-3275 VFS DOS Dentry cache memory use (needs UBIFS for exploit)
CVE-2008-3276 net remote DOS DCCP integer overflow
CVE-2008-3496 UVC drivers privilege UVC driver buffer overflow
CVE-2008-3525 net privilege Missing checks in sbni WAN driver
CVE-2008-3526 net remote privilege SCTP integer overflow
CVE-2008-3534 VFS DOS tmpfs crash
CVE-2008-3535 VFS DOS readv()/writev() off-by-one
CVE-2008-3686 net DOS IPv6 null pointer dereference
CVE-2008-3792 net remote DOS SCTP-AUTH crashes

That is 41 CVE numbers (so far) for 2008 - not a small number. Fully 1/3 of these vulnerabilities were in the networking subsystem, which is scary: this is the most likely place to find remotely-exploitable problems in the kernel. It is true that sites not running SCTP or DCCP can forget about many of those, and IPv6 is responsible for a few of the rest, so most of those vulnerabilities were not a concern for most sites.

Many of the remaining vulnerabilities were in the core kernel or in architecture-specific code. The number of vulnerabilities found in drivers - the part of the kernel which has long been sneered at as containing the worst code - is actually quite small. On the other hand, four of the CVE-listed vulnerabilities (the Xen, AppArmor, and utrace problems) were caused by out-of-tree code added by distributors. There is no way to know how many vulnerabilities were fixed without obtaining a CVE number - or without even realizing that a vulnerability existed in the first place.

When a single program is responsible for this many vulnerabilities, it makes sense to ask why. The kernel, of course, is a very large program; more code means more bugs, some of which will have security implications. Beyond that, though, the kernel runs in a special, privileged environment. Flaws which would simply be fixed as just-another-crash in a normal application are denial-of-service vulnerabilities in the kernel - or worse. So a larger number of vulnerabilities in the kernel does not, by itself, imply that the kernel's code is worse than that of other programs; it only reflects the fact that the consequences of kernel bugs tend to be more severe.

The discovery (and repair) of vulnerabilities does not necessarily imply that our current process is creating a lot of vulnerabilities; it could be that we are mostly fixing older problems. If the developers are fixing vulnerabilities more quickly than they are adding more, life should be good in the long run. The vulnerabilities in the list above vary from those which are very old (affecting 2.4 kernels too) to some which are very new (the UVC driver was added in 2.6.26). Some of them are in code which, while being intended for the mainline, has not yet been merged. It is probably impossible to say whether security problems are being fixed more quickly than they are being created, but one thing is clear: all of that code flowing into the mainline is bringing a certain number of security problems with it.

For that reason, it is a little discouraging that there is little work being done in the kernel community with the explicit goal of improving the security of the kernel. Few patches are reviewed with security issues in mind; the vmsplice() vulnerability, as one example, was a clear failure of the review process. There are undoubtedly many people who are doing fuzz testing and such - some of them are even the good guys - but much of the formal testing going on seems aimed more at API conformance than at security verification. There must be more work going on behind the scenes, but it is still hard to avoid a sense of a certain amount of complacency with regard to security issues.

As a community, we take pride in the security of our system. But one vulnerability per week is not the most inspiring security record. It would be good to find a way to do better than that. Better tools must be a part of the solution, but more thorough code review is also needed. There still is no substitute for a pair of eyeballs looking for ways in which new code might be subverted. Asking for more security-oriented review seems ambitious when code review is already one of the biggest bottlenecks in the development process. But the alternative would appear to be to continue to add to our collection of CVE numbers.



Kernel security, year to date

Posted Sep 9, 2008 20:01 UTC (Tue) by spender (subscriber, #23067) [Link]

This illustrates part of the problem with the kernel developers covering up security issues. You can't even illustrate how pathetic the current situation is because of the fact that found+fixed vulnerabilities aren't being acknowledged (either through CVEs or other means). The latest "stable" kernel release is filled with fixes for vulnerabilities (some of which have CVEs assigned, but it's been decided that that information won't be included in the stable release changelogs).

The classifications for these vulnerabilities are also wildly inaccurate.
Take for example:
CVE-2008-2365 core DOS Red Hat utrace race
If you go and look at the bugzilla entry for it:
https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2008-2365
Sitting right there in the OOPS report is everything you need to know: RIP = 0, hence trivial arbitrary code execution from Linux 2.6.9 to 2.6.25, and not a "DoS".

Same goes for:
CVE-2008-3686 net DOS IPv6 null pointer dereference
which btw is still not properly fixed (the exploitable null dereference just moved to a less obvious spot under a specific configuration)

It's very obvious that this problem is due to the development model they've decided to adopt. Despite its advantages in getting new features out to users more quickly, it has some serious disadvantages: particularly in stability and security. They reaffirm their decision to use this model by pretending the disadvantages don't exist, by trivializing the horrible security of these kernels.

And it's only going to get worse, primarily because of your collective apathy. You gauge your success by how many "vmsplice" vulnerabilities have been found -- and I've got to tell you, that's not any kind of anomaly when it comes to the actual security of the kernel. It's more surprising that it was made public. Read the actual exploit where it alludes to how long it had been kept private.

You've enabled them to not report security issues. You've made your bed and you'll have to lie in it.

-Brad

Kernel security, year to date

Posted Sep 9, 2008 21:13 UTC (Tue) by branden (subscriber, #7029) [Link]

Is that the rhetorical "you" you're referring to?

Kernel security, year to date

Posted Sep 9, 2008 21:36 UTC (Tue) by drag (subscriber, #31333) [Link]

Ya. It's a bit strange. It's like he is blaming me for the security problems or something.

> You've enabled them to not report security issues. You've made your bed and you'll have to lie in it.

You (spender) sound like you're arguing with somebody in your head who has nothing to do with the editor/writer of the article or anybody else who would happen to read your reply. Maybe you should spend a little less time arguing with your imagination before writing a reply next time you have something you want to say.

;)

Kernel security, year to date

Posted Sep 9, 2008 22:00 UTC (Tue) by spender (subscriber, #23067) [Link]

You, as in, you Linux users (who read this site) who when this issue of kernel security was raised recently, said "you're right Linus, security bugs are no more important than normal bugs, you and the rest of the kernel developers don't need to waste any of your precious time on informing me of any of the exploitable vulnerabilities found in your software." You, as in, you Linux kernel developers (who read this site) who refuse to acknowledge these fundamental problems, and instead of taking steps to correct them, cover them up or ignore them.

As I don't remember you (as in you, drag) speaking up when this issue was last raised, then yes, your implicit acceptance is partially to blame for the new non-disclosure policy of the kernel developers. It would not have been possible for them to adopt this policy in the presence of general public outrage.

But of course, taking this discussion to a meta-level of what I mean by "you" is much more productive.

-Brad

Kernel security, year to date

Posted Sep 9, 2008 22:13 UTC (Tue) by drag (subscriber, #31333) [Link]

> You, as in, you Linux users (who read this site

You're talking to imaginary people. So stop it.

This is an obvious effort to continue some flame war with people you've forgotten the names of (or you're trying not to be so obvious), and you're hoping that they'll pick up the bait and start arguing with you again.

You have important things to say (in fact, you're essentially agreeing with the person who wrote the article, BTW), but you're clouding your message with this silliness. Trying to pick fights is a waste of everybody's time and is putting you off-message. In other words, you're trying to start a flame war. If your post lacked content then I would not have hesitated to label you a 'troll' and wouldn't be talking to you right now.

(I am assuming you're trying to impress on everybody the need for higher code review standards in the kernel, which is a worthy thing.)

Kernel security, year to date

Posted Sep 9, 2008 22:58 UTC (Tue) by nix (subscriber, #2304) [Link]

Face it. You're not going to get 'general public outrage' about *any*
computer security problem unless it causes mass death, and even then it
might not happen. Security is a boring overhead to most people, so any
scheme which attempts to change anything in the security domain by
attempting to incur 'general public outrage' is guaranteed to fail.

(What's more, it's tiresome. Fixing the damn bugs is surely more
worthwhile than complaining endlessly about them.)

(Also: compared to a lot of code in critical positions, the security of
the kernel is pretty damn good. A while back I looked for security holes
in the product I work on in my day job, which throws many millions of
dollars around in the financial markets on a daily basis and is often
intentionally (!) left exposed to the Internet at large. I gave up when I
realised that the security hole density was approximately one per twenty
lines, generally enormous buffer overruns, trusting of untrustworthy data
from completely unauthenticated external sources, and SQL injection
attacks up the wazoo. I tried to convince my coworkers not to introduce
more such bugs, but nobody else considered any of these things
problematic. You can always trust external data, can't you? And if bad
stuff comes in, well, it's not *your* fault. Blame the attacker.)

Kernel security, year to date

Posted Sep 9, 2008 23:49 UTC (Tue) by drag (subscriber, #31333) [Link]

If you want to apply your evilness towards a creative goal check out Linux-hater's blog.
http://linuxhaters.blogspot.com/

If you compile a list of vulnerabilities and memorable quotes from kernel developers regarding security then that is very likely to attract attention.

Kernel security, year to date

Posted Sep 9, 2008 21:40 UTC (Tue) by bfields (subscriber, #19510) [Link]

RIP = 0, hence trivial arbitrary code execution

How does that work? (Just curious.)

It's very obvious that this problem is due to the development model they've decided to adopt. Despite its advantages in getting new features out to users more quickly, it has some serious disadvantages: particularly in stability and security. They reaffirm their decision to use this model by pretending the disadvantages don't exist, by trivializing the horrible security of these kernels.

How is that obvious? (How do you know that fewer bugs would be generated with a different model?)

Kernel security, year to date

Posted Sep 9, 2008 21:53 UTC (Tue) by spender (subscriber, #23067) [Link]

Read: http://seclists.org/dailydave/2007/q1/0224.html

As for why the development model is a large reason for the problem: the easiest comparison (if we cover our eyes and assume that the numerous vulnerabilities I've mentioned on this site and elsewhere for which there is no CVE don't exist, like the SELinux remote DoS) is to compare the number of CVEs for 2.4 against those for 2.6 for this year:

For 2.6 we have 41
For 2.4 we have 11 (based on my count from changelogs, feel free to double-check it)

-Brad

Kernel security, year to date

Posted Sep 9, 2008 22:17 UTC (Tue) by bfields (subscriber, #19510) [Link]

Read: http://seclists.org/dailydave/2007/q1/0224.html

Following that to the tarball, to the included README....

So the answer is just "mmap buffer at address 0", then trigger the bug that results in calling a function at 0 (or some small offset from that). OK.

(Stupid question: are those low addresses always available? I would've thought you'd want to treat the first and last page of the address space specially, exactly to increase the chances of catching such a typical mistake.)

Kernel security, year to date

Posted Sep 9, 2008 22:23 UTC (Tue) by paulj (subscriber, #341) [Link]

Nah, RIP==0 implies the IP got corrupted (e.g. through stack overflow), which almost certainly means it can be manipulated..

Kernel security, year to date

Posted Sep 9, 2008 22:45 UTC (Tue) by spender (subscriber, #23067) [Link]

Not necessarily: all the world isn't a stack overflow (they're actually
quite rare in the kernel). In this case it's much more likely that it was
just jumping/calling through a null function pointer -- no tricks needed,
just straightforward arbitrary code execution.

-Brad

Kernel security, year to date

Posted Sep 9, 2008 23:02 UTC (Tue) by nix (subscriber, #2304) [Link]

Jumping through a null fp could have been a simple bug. There's no
requirement for the content of the pointer to be manipulable by external
sources. (Of course it's quite possible, and even the nice case would
still make it a DoS attack, and speaks of a missed case in testing, at
least: but the kernel has so many configuration options, runs on such a
variety of hardware, and makes such heavy use of function pointers that
complete coverage of these situations is never going to happen. Alas.)

It *is* interesting that most of the holes aren't in old crufty driver
code: I suppose this is because that code doesn't change much, and also
doesn't get reviewed much because the security impact of a hole in the
sbpcd driver isn't exactly huge :) )

Kernel security, year to date

Posted Sep 9, 2008 23:21 UTC (Tue) by jreiser (subscriber, #11027) [Link]

> the kernel ... makes such heavy use of function pointers that complete coverage of these situations is never going to happen.

Why not? Build an option into gcc that tests for zero at every invocation of a non-lexical function, then make that option the default when compiling the Linux kernel. Or have every interrupt and syscall map a replacement if page 0 already has a user-level mapping.

Kernel security, year to date

Posted Sep 10, 2008 8:57 UTC (Wed) by nix (subscriber, #2304) [Link]

Both of these options would be really, really -really- expensive (especially the latter: page table manipulation is expensive). The first one has more promise: GCC already has *some* code to insert automatic tests against NULL, it just isn't hooked up in the right places.

Kernel security, year to date

Posted Sep 10, 2008 4:46 UTC (Wed) by paulj (subscriber, #341) [Link]

I very deliberately used "e.g." rather than "i.e." wrt "stack overflow" (I've little idea about how exploits usually work). ;) How does this null function pointer attack work?

Kernel security, year to date

Posted Sep 10, 2008 15:22 UTC (Wed) by spender (subscriber, #23067) [Link]

Though I specifically noted the stack overflow case, I was rejecting the general notion that the null ptr dereference was due to some attacker-influenced manipulation or control of kernel data.

These bugs generally crop up from incomplete handling of all the possible states of a pointer. The most fruitful are bugs involving a null pointer to a structure containing a function pointer, or simply a null function pointer itself. The kernel has a ton of the first case: it's the way in which abstractions are made. These grant trivial arbitrary code execution on x86 (just map your code at address 0 and it gets executed in kernel context), whereas in other cases these bugs can be used to feed the kernel trojaned data -- though the usefulness of this for DoS or privilege elevation has to be determined on a case-by-case basis.

-Brad

Kernel security, year to date

Posted Sep 11, 2008 10:01 UTC (Thu) by nix (subscriber, #2304) [Link]

It seems like we should add CAP_LOW_MAP_FIXED to me, set off for virtually everything other than Wine (and the X server, and perhaps a few Lisp interpreters?), and deny MAP_FIXED mmap()s in the low megabyte or so of the address space to processes without that capability. It's not as though most programs would *want* to torpedo their own ability to segfault on null pointer dereferences!

Kernel security, year to date

Posted Sep 11, 2008 14:14 UTC (Thu) by paulj (subscriber, #341) [Link]

Huh.. wow. I thought the kernel took care of setting RO mappings for page 0 (and more - the low "ASCII range" of addresses?) to ensure faults, as a security measure (for userspace primarily).

If not, wow. Bring back the separate user/kernel address space...

Kernel security, year to date

Posted Sep 9, 2008 22:31 UTC (Tue) by spender (subscriber, #23067) [Link]

The bugclass is larger than just allocating at 0, it involves all invalid
userland dereferences. So for example, there have been bugs where a
pointer was used which had a magic poison value in it, and this poison
value resulted in an address which was located somewhere in the middle of
the userland address space.

Openwall implemented HARDENED_PAGE0, and a derivative of it has been
implemented in the mainline Linux kernel (after looking at the Openwall
code and fixing the following trivial bypass in the original version
http://www.frsirt.com/english/advisories/2007/4200, which sat around in
their codebase for 6 months). These solutions only
protect against the 0+small offset variety of the bug, and obviously only
for people who actually have it set to a meaningful value, and it also
isn't enabled for all applications (it can break wine, for example). PaX
has UDEREF which prevents exploitation of the entire class of bugs, not
just the 0+small offset in protected apps.

-Brad

Kernel security, year to date

Posted Sep 9, 2008 22:28 UTC (Tue) by bfields (subscriber, #19510) [Link]

As for why the development model being a large reason for the problem, the easiest comparison (if we cover our eyes and assume the numerous vulnerabilities I've mentioned on this site and elsewhere for which there is no CVE don't exist, like the SELinux remote DoS), is to compare the numbers of CVEs for 2.4 against those for 2.6 for this year:

Yeah, unfortunately I think you'd have trouble convincing anyone that "number of CVE's" was a very useful statistic. (Unfortunate because it *would* be useful to be able to make those kinds of comparisons. I don't know what would be better. You could do audits of random samples of the code bases in question, but that sounds expensive. Statistics from the static analyzers and such might be better than nothing.)

Kernel security, year to date

Posted Sep 9, 2008 22:42 UTC (Tue) by spender (subscriber, #23067) [Link]

Well, unfortunately, it's the same metric being used to gauge the quality
of the Linux kernel's security in the very same article here that you're
commenting on. I completely agree that it's highly flawed (I'd, for
instance, put the actual number of vulnerabilities fixed this year at
around 80-100: for every vulnerability that gets public recognition,
there is at least one other that does not).

But at the same time, if we take into account the idea of silently fixed
vulnerabilities, there are *far* fewer bugfixes made to the 2.4 tree for
these to hide in compared to the 2.6 tree. It's not unreasonable at all I
think to say that with 40MB of code changes per stable release, it's not
exactly possible to maintain a secure codebase.

You could also look at how many of the vulnerabilities affected 2.6 only --
nearly all of the 2.4 vulnerabilities were present in 2.6 as well.
In 2.6, there have been many serious vulnerabilities recently but they
won't get much public attention because they only affect a small number of
recent kernels (the kernel developers fixing their recently introduced bugs
basically).

-Brad

Kernel security, year to date

Posted Sep 9, 2008 23:04 UTC (Tue) by nix (subscriber, #2304) [Link]

I'm not sure static analysis statistics are very useful, because bugs that
can be found by static analysis will have *been* found to some degree, and
thus preferentially fixed.

The interesting set is that which no static analysis tool can yet detect.
Unfortunately this is also the set that costs a bomb to locate.

Kernel security, year to date

Posted Sep 9, 2008 23:33 UTC (Tue) by njs (guest, #40338) [Link]

Agreed on CVEs being an imperfect measure, but they're presumably better than nothing, especially if all you want to do is identify trends.

But are you suggesting that 2.6 should switch to the 2.4 development model? Not the original 2.4 development model, the current one -- bug fixes only for years on end. Or... what?

I'm not trying to mock, it's a serious question -- you come in ranting plausibly about security issues being a problem, and how "we" should do something about that, but it's not at all clear to me what -- specifically -- would be better. (And it's a bit off putting that you seem to blame "us" for not doing... well, something...)

There are trade-offs. If a four-fold increase in security holes were really the price of 2.6's improvements compared to 2.4, that actually seems amazingly cheap -- though the real ratio is certainly much worse. How do we do better?

Kernel security, year to date

Posted Sep 10, 2008 0:11 UTC (Wed) by spender (subscriber, #23067) [Link]

Even for trends they're problematic: at most it gives you trends in what
gets publicly reported as security bugs; this is a level removed from the
kinds of things you would actually want to know.

My personal preference was for the odd/even development model. The current
model is unsustainable from a security perspective -- but there's no
interest in changing from that model, so suggesting a change of models
isn't useful. In other words, I realize the model won't change, so I'm
merely pointing out the problem with it and suggesting that at least
*something* be done about the security aspect. The current official view
is simply to "fix bugs" and not treat security bugs as special in any way.
With that kind of view, things will only decline.

We had previously extensively discussed changes that could be made to the
process so that at least security related bugs would be reported with more
accuracy and consistency, but instead it was decided to go backwards
instead of forwards. So my post wasn't really a request for change but
more of a "why are you surprised that it's so horrible?", since several
people have been pointing it out for years, and it continues to get worse.

At this point, I don't know what to tell you other than you should have
spoken up last month before they began their anti-security campaign. If you
want to find out what things you were vulnerable to, you'll have to
discover that information on your own by investigating each patch. If you
want to improve the security of the kernel in some way, you'll have to do
that yourself as well -- nobody seems to take it seriously.

Since you asked though about what can be done to improve kernel security,
one route is to remove the exploitability of certain bugclasses. The PaX
team recently has added a feature which prevents exploitation of refcount-
based bugs (like the ptrace ones listed in the CVEs in this article). So
there are always things that can be done with negligible impact, but don't
hold your breath waiting for it to come from the kernel developers
themselves.

-Brad

Kernel security, year to date

Posted Sep 11, 2008 9:58 UTC (Thu) by intgr (subscriber, #39733) [Link]

The rest of your post I can mostly agree with, but this statement of yours is completely backwards:

The PaX team recently has added a feature which prevents exploitation of refcount-based bugs (like the ptrace ones listed in the CVEs in this article). So there are always things that can be done with negligible impact, but don't hold your breath waiting for it to come from the kernel developers themselves.

If there are security enhancements that have (supposedly) negligible overhead, then why are they not being pushed toward the mainline, where they can benefit everyone? Expecting kernel developers to go to the PaX team begging for their patch is not the way mainline development works. You go as far as to imply that the mainline kernel developers should duplicate this effort -- why?

Supposedly, getting patches into the kernel is easy with the current development model. Yes, they have different ideas about how security should be managed, but they are reasonable people. Why is the PaX team not interested in working with the mainline kernel?

Kernel security, year to date

Posted Sep 11, 2008 13:47 UTC (Thu) by PaXTeam (subscriber, #24616) [Link]

> If there are security enhancements that have (supposedly) negligible
> overhead, then why are not they being pushed towards the mainline where
> they can benefit everyone?

why does something have to be in mainline to benefit everyone?

> Expecting kernel developers to go to the PaX team begging for their
> patch is not the way mainline development works.

any source for this (rather silly, i might add) allegation? last time i checked, the PaX patches were out there freely available to everyone.

> Supposedly, getting patches into the kernel is easy with the current
> development model. Yes, they have different ideas about how security
> should be managed, but they are reasonable people.

my experience says otherwise:

http://linux.derkeiler.com/Mailing-Lists/Kernel/2006-07/m...
http://linux.derkeiler.com/Mailing-Lists/Kernel/2006-07/m...
http://linux.derkeiler.com/Mailing-Lists/Kernel/2006-07/m...

(unfortunately vger's spam filter ate my mails, so you'll have to piece the story together from the quoted parts). the funny thing is that since last year, when CFS (by Ingo, no less) went into the kernel, it too became victim of the user_mode() mess in that now if you run an endless loop in v86 mode, its runtime will be accounted as system time, not user time. to quote Linus: 'Sad.'

> Why is the PaX team not interested in working with the mainline kernel?

it's the other way around: i've been told explicitly and got it even codified in DCO that anonymous submissions are not to be. i prefer my status over silly policies.

Kernel security, year to date

Posted Sep 11, 2008 16:02 UTC (Thu) by pdundas (subscriber, #15203) [Link]

> why does something have to be in mainline to benefit everyone?

It doesn't have to be in mainline to benefit everyone.
But putting fixes in mainline DOES benefit everyone.
Putting them anywhere else only benefits a subset of users.

> > Expecting kernel developers to go to the PaX team
> > begging for their patch is not the way mainline
> > development works.

> any source for this (rather silly, i might add)
> allegiation? last time i checked, the PaX patches
> were out there freely available to everyone.

I think you're missing the point he was making.

If you want patches to go into mainline (and we just saw
that is A Good Thing (TM)), then Someone (TM) has to
submit that patch.

Possibly developing and testing their own code and
reviewing submissions keeps mainline kernel devs
quite busy. They may not have all that much time for
scouring the net for potentially useful patches.

Kernel security, year to date

Posted Sep 10, 2008 0:16 UTC (Wed) by clugstj (subscriber, #4020) [Link]

No, actually, the rate of CVE creation could easily be worse than nothing: it need not be related to the actual number of bugs in the kernel at all. As long as there are enough bugs (and there are, in any piece of complex software) to create CVEs, you could manipulate the CVE creation rate if you were so inclined.

Kernel security, year to date

Posted Sep 9, 2008 22:47 UTC (Tue) by nix (subscriber, #2304) [Link]

A development model in which nothing ever changed would introduce no new
bugs. The current development model has a high change rate.

The remainder follows by induction.

(Not that it's very *useful*, but when was the last time you heard of a
bug of any kind being introduced into TOPS-20, or DOS?)

Development model

Posted Sep 10, 2008 21:28 UTC (Wed) by man_ls (subscriber, #15091) [Link]

So true. And with the current rate of changes, any statistics are less than useful: the last development cycle may have produced some 170k new lines and maybe 20 vulnerabilities. That is one CVE per 8k lines of code, i.e. a needle in a haystack.

A really useful study on vulnerabilities would have to contemplate (as our editor suggests) the rate of changes, the rate of introduction and the rate of removal; and compare with other development models. Anything else is just anecdotal evidence.

Kernel security, year to date

Posted Sep 18, 2008 7:10 UTC (Thu) by adobriyan (guest, #30858) [Link]

> CVE-2008-2365 core DOS Red Hat utrace race
> If you go and look at the bugzilla entry for it:
> https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2008-2365
> Sitting right there in the OOPS report is everything you need to know:
> RIP = 0, hence trivial arbitrary code execution from Linux 2.6.9 to
> 2.6.25, and not a "DoS".

Sigh...

Those very attentive security researchers and accurate CVE database.

This particular bug was relevant to only kernels patched with utrace.

Only utrace kernels, nothing more.

Mainline was never affected.

A more useful metric

Posted Sep 9, 2008 21:42 UTC (Tue) by spender (subscriber, #23067) [Link]

It probably wouldn't raise much argument that over the past several years, the most serious and widely reported/recognized vulnerabilities have come out of isec.pl. A large reason for this is that they were one of the few groups that would actually publish exploit code. In a misguided view of security, many don't perceive there to be any "threat" unless a weaponized exploit is made public (clearly visible by continued mentions of the "vmsplice" boogey-man, for example).

So first some data (in chronological order):
http://isec.pl/vulnerabilities/0004.txt
http://isec.pl/vulnerabilities/isec-0012-do_brk.txt
http://isec.pl/vulnerabilities/isec-0012-mremap.txt
http://isec.pl/vulnerabilities/isec-0014-mremap-unmap.txt
http://isec.pl/vulnerabilities/isec-0015-msfilter.txt
http://isec.pl/vulnerabilities/isec-0016-procleaks.txt
http://isec.pl/vulnerabilities/isec-0017-binfmt_elf.txt
http://isec.pl/vulnerabilities/isec-0018-igmp.txt
http://isec.pl/vulnerabilities/isec-0019-scm.txt
http://isec.pl/vulnerabilities/isec-0022-pagefault.txt
http://isec.pl/vulnerabilities/isec-0023-coredump.txt
http://isec.pl/vulnerabilities/isec-0024-death-signal.txt
http://isec.pl/vulnerabilities/isec-0025-syscall-emulatio...
http://isec.pl/vulnerabilities/isec-0026-vmsplice_to_kern...

The last 3 and the first one are credited to Wojciech Purczynski (cliph@isec.pl), while the remaining 10 are credited to Paul Starzetz (ihaquer@isec.pl). The last 3 from cliph are during his employment at COSEINC.

Paul Starzetz had this to say about the Linux kernel (from http://searchenterpriselinux.techtarget.com/news/article/...):
"First the problem [with] Linux is that there are too many people 'hacking' the code. It has reached a complexity where the 'I-hack-quickly-some-code' approach doesn't work anymore."

and in reply to a security advisory dismissing one of his vulnerabilities as a "DoS" (from http://www.security-express.com/archives/bugtraq/2006-07/...):
"I really wonder why in the recent past there is a tendence to declare
such things as "denial of service" etc - while they are perfect root
backdoors / vulns

*B000M* you are in one minut^K^K^Ke later...

Maybe this is just to hide the overall bad quality of the 2.6 kernel
code? *just guessing*

Anyway CVE-2006-2451 is trivially exploitable so I don't attach any
exploit code since it is obvious..."

I should also mention that since October 15, 2007, Paul Starzetz has been employed by Immunity, who specifically practices non-disclosure. So if you're patting yourselves on the back because he hasn't made public any more serious exploits in the kernel, it has nothing to do with the quality of the code.

-Brad

A more useful metric

Posted Sep 9, 2008 21:47 UTC (Tue) by drag (subscriber, #31333) [Link]

> So if you're patting yourselves

Who are you talking to?

A more useful metric

Posted Sep 9, 2008 21:58 UTC (Tue) by sbishop (guest, #33061) [Link]

Those of us reading LWN...

So Spender isn't a writer; that's fine. But his comments provide enough content to justify content in return, I think.

A more useful metric

Posted Sep 9, 2008 23:13 UTC (Tue) by ballombe (subscriber, #9523) [Link]

That's just a new fad in lwn comments: meta-trolling.

Instead of trolling, accuse someone else of trolling! You get basically the same result but you have 'the moral high ground'(tm).

If you are lucky, people believe you were responding to a comment that has been deleted (!) and you can enjoy the ensuing confusion.

Maybe I am old fashioned, but I find trolls less painful than self righteous meta-trolls. (this post is intended as a meta-meta-troll).

A more useful metric

Posted Sep 9, 2008 23:50 UTC (Tue) by drag (subscriber, #31333) [Link]

Well thanks for the attention!

(weirdo)

A more useful metric

Posted Sep 10, 2008 0:15 UTC (Wed) by nix (subscriber, #2304) [Link]

I never meta-troll I didn't find annoying.

(sorry sorry)

A more useful metric

Posted Sep 10, 2008 5:01 UTC (Wed) by paulj (subscriber, #341) [Link]

He's assuming an audience for his comment, a sceptical or even hostile audience at that. It's a pretty obvious rhetorical device.

I despise myself slightly for adding to it, but do we really need to open a *second* sub-thread commenting on the style of spender's commenting? His posts are otherwise very relevant and interesting, and the audience here is surely intelligent enough to sidestep the style issues...

A more useful metric

Posted Sep 10, 2008 6:31 UTC (Wed) by drag (subscriber, #31333) [Link]

Sure sure. All apologies.

A more useful metric

Posted Sep 10, 2008 9:05 UTC (Wed) by nix (subscriber, #2304) [Link]

His posts are very intelligent and interesting, but they lead to only two questions that I can see, one of which has been answered and the other of which may have no answer:

- can we fix the bugs? obviously yes, but then we get accused of cover-ups unless we raise the roof over every single bug: and see past posts from Al Viro about why it's ridiculously impractical to go via vendor-sec for every such bug: if major kernel developers refuse to use it because it's such an embargoing straitjacket, then castigating people because they don't use it is a waste of time.

- can we change things so that the bugs aren't introduced so fast? Maybe, but so far no method has been proposed that doesn't have the kernel developers recoiling in horror, and if a magic development method was known that avoided all security holes and didn't utterly devastate development rates, everyone would already be using it. (e.g. formal proving of absolutely everything from the spec on down reduces security holes, but is horrible for development. Multi-person review of every change, a-la OpenBSD, does the same, but a chronic lack of reviewers makes that hard: and areas like mm are sufficiently subtle that few people are qualified to be reviewers at all.)

Going back to even/odd doesn't strike me as being likely to happen, given that that system was imploding under the much lower change load that preceded git. A more rapidly alternating .26-is-stable .27-is-unstable scheme might work, but I'm not sure what that means other than giving a formal number to the post-merge-window -rc1 kernel, and I'm not sure how *that* would help. (It might encourage people to test the -rcs, I suppose!)

A more useful metric

Posted Sep 10, 2008 12:43 UTC (Wed) by jengelh (subscriber, #33263) [Link]

>"First the problem [with] Linux is that there are too many people 'hacking' the code. It has reached a complexity where the 'I-hack-quickly-some-code' approach doesn't work anymore."

Complexity... is only half of the story. LOC divided by complexity (unitless number, lower is better) is what you need to look at.

A more useful metric

Posted Sep 10, 2008 12:45 UTC (Wed) by jengelh (subscriber, #33263) [Link]

epic fail :)
Of course it should be complexity/LOC with lower-is-better.

A more useful metric

Posted Sep 10, 2008 18:32 UTC (Wed) by tzafrir (subscriber, #11501) [Link]

The data you present here suggests that the problem was roughly of the same severity last year.

There is also the issue of the larger rate of change. The fact that more code gets into mainline is a basic requirement. The tools available today and the processes used today allow this.

If you don't allow the code to get into mainline, people will use bad, unreviewed out-of-tree code. Not to mention all sorts of bugs that will be added by distributions when attempting to integrate all sorts of patches.

Kernel security, year to date

Posted Sep 10, 2008 2:10 UTC (Wed) by ebiederm (subscriber, #35028) [Link]

What kind of kernel bugs are not security vulnerabilities?

Kernel security, year to date

Posted Sep 10, 2008 2:26 UTC (Wed) by k8to (subscriber, #15413) [Link]

Nothing-happens bugs? Not all of them for sure, but for example:

Sound Card Driver X does not produce sound.

Is this a security bug? A security bug *could* exist that has this symptom, but the symptom doesn't necessitate one.

Kernel security, year to date

Posted Sep 10, 2008 3:00 UTC (Wed) by bfields (subscriber, #19510) [Link]

Sound Card Driver X does not produce sound.

I dunno, maybe someone has an alarm system that runs on Linux?

Any bug causes the system to behave in a way that a user (or application writer, or administrator) didn't anticipate. So for any bug you can probably contrive some situation where it could have security consequences.

I'm not convinced that completely makes the "security bug" category useless, but it's probably true that there's more of a continuum than we usually admit.

Kernel security, year to date

Posted Sep 10, 2008 3:30 UTC (Wed) by spender (subscriber, #23067) [Link]

It's very simple, Eric is just feigning ignorance here (or is seriously
deluded) so as to give credence to Linus' belief that security bugs are no
different or more important than regular bugs.

If you had read http://lkml.org/lkml/2008/7/17/94, you'd have seen the PaX
team describe security bugs as:
"anything that breaks the kernel's security model. privilege elevation
always does."

Sound not working is not a violation of the kernel's security model.

For another example, BUG()s which occur without any locks held that simply
cause nothing other than the process attempting an exploit to be terminated
are also not a violation of the kernel's security model. No privilege is
gained and the system remains fully available.

-Brad

Kernel security, year to date

Posted Sep 10, 2008 9:04 UTC (Wed) by bojan (subscriber, #14302) [Link]

> Linus' belief that security bugs are no different or more important than regular bugs

There is no doubt in my mind that Linus is a genius. But, that belief of his is borderline nonsense. It's like saying that you want your mechanic to assign the same importance to problems with your brakes and to discolouration of the upholstery.

Kernel security, year to date

Posted Sep 10, 2008 13:56 UTC (Wed) by dlang (✭ supporter ✭, #313) [Link]

as long as your mechanic fixes both, why would you care that they were both just single line items on your bill?

Linus isn't saying that they are the same when deciding what to fix, he is saying that when the fixes are completed and available they should be treated the same, no matter if they are known to be security problems or not.

in part this is because many bug fixes end up fixing security problems that the author of the fix doesn't realize are there in the first place, so if you only apply fixes marked as 'security' you will not have a secure system.

other people think that someone should research all possible implications of every bugfix, and if it could be a security issue create a CVE number for it, and only after that submit the fix to be included.

personally I would rather see the person fix a couple more bugs than to have them take the time to jump through all of those hoops (never mind the fact that many fixes get tweaked after they are submitted, which would cause the need to go through all of that again)

Kernel security, year to date

Posted Sep 10, 2008 15:46 UTC (Wed) by PaXTeam (subscriber, #24616) [Link]

> Linus isn't saying that they are the same when deciding what to fix, he
> is saying that when the fixes are completed and available they should be
> treated the same, no matter if they are known to be security problems or
> not.

when a non-security bug is fixed (say, one resulting in file system corruption), the commit explains what the issue was. when a security bug is fixed, it doesn't say so. how's that equal treatment?

> in part this is because many bug fixes end up fixing security problems
> that the author of the fix doesn't realize are there in the first place,
> so if you only apply fixes marked as 'security' you will not have a
> secure system.

why would anyone apply only explicitly marked security fixes? maybe one just wants to use that info for prioritizing fixes for backporting. second, why are you suggesting that by applying not only explicitly marked security fixes one will have a 'secure system' (such a thing doesn't seem to exist)? it's not about absolutes, it's about shades of grey: by being able to prioritize known security fixes one will have a *more* secure system, simple as that.

> other people think that someone should research all possible implications
> of every bugfix, and if it could be a security issue create a CVE number
> for it, and only after that submit the fix to be included.

you're leaving out the most common and obvious case: when the developer already knows that a bug is security related. what does it cost then to mention it in the commit? not even a CVE is needed for that.

Kernel security, year to date

Posted Sep 10, 2008 19:52 UTC (Wed) by dlang (✭ supporter ✭, #313) [Link]

if you don't want to only apply security fixes to get a secure system, why do you need the big red flag that a patch is a security fix? if you are applying all bugfixes then you will get the security fixes along with everything else.

as for the reason to not call it out, to give the good guys a chance to apply fixes before the bad guys are writing exploit code.

if it's called out as specifically being a security fix then the bad guys can start work immediately on producing an exploit; they don't have to examine the fix to see if it is a security fix or not. yes, this is a bit of security by obscurity, but obscurity by itself isn't a bad thing, it's only when you depend on it as your only defense that it becomes a disaster.

Kernel security, year to date

Posted Sep 10, 2008 20:26 UTC (Wed) by PaXTeam (subscriber, #24616) [Link]

> if you don't want to only apply security fixes to get a secure system,

what do non-security fixes have to do with the security of the system?

> why do you need the big red flag that a patch is a security fix?

why? it was in my post, read it again ;). keyword: priority. and if the security bug isn't marked as any kind of important fix whatsoever (as is the modus operandi, apparently) then how is anyone supposed to figure out that it needs to be backported? besides, what is this 'big red flag'? do you only think in extremes?

> if you are applying all bugfixes then you will get the security fixes
> along with everything else.

you're again thinking in black&white mode. what makes you think that people want all or nothing? what if sometimes one needs to prioritize and does risk evaluation before deciding on backporting and/or applying a fix?

> as for the reason to not call it out, to give the good guys a chance to
> apply fixes before the bad guys are writing exploit code.

this 'argument' has been thoroughly debunked back in July. the short story is that guys capable of writing exploit code do not need such marks in commit messages because they will figure it out themselves, sometimes as soon as the commit containing the security bug goes in, way before any fix can possibly reach the good guys.

it's also funny that you're assuming that by not marking a commit as a security fix, the good guys will somehow be better off in identifying it as such.

Kernel security, year to date

Posted Sep 11, 2008 1:37 UTC (Thu) by eteo (guest, #36711) [Link]

> if you don't want to only apply security fixes to get a secure system,
> why do you need the big red flag that a patch is a security fix? if you
> are applying all bugfixes then you will get the security fixes along with
> everything else.

You wouldn't want to introduce possible instability to the system by applying other non-security fixes and enhancements.

> as for the reason to not call it out, to give the good guys a chance to
> apply fixes before the bad guys are writing exploit code.

Obscurity doesn't help, and it only makes the matter worse. Have you ever thought about the possibility that the good guys could also miss applying the security fixes just because of the obscurity in the changelogs?

Kernel security, year to date

Posted Sep 11, 2008 7:35 UTC (Thu) by dlang (✭ supporter ✭, #313) [Link]

>> if you don't want to only apply security fixes to get a secure system,
>> why do you need the big red flag that a patch is a security fix? if you
>> are applying all bugfixes then you will get the security fixes along with
>> everything else.

> You wouldn't want to introduce possible instability to the system by
> applying other non-security fixes and enhancement

and I am saying that trying to do this means that you will miss security fixes because at the time they were created nobody realized that they fixed security issues.

because of this the good guys need to apply all the patches, not try to cherry-pick the ones that they think are 'important'. If they want to do so (distros for example), they need to investigate _every_ patch to see if it has security implications or not. tagging some of them as having security implications strongly implies that ones that are not tagged do not have security implications, and that is incorrect.

even if all the commit says is 'this is important for security' the fact that the details of the fix are directly attached to the comment makes it pretty easy for the bad guy to focus their exploit effort.

I think that the reduction in effort is greater for the bad guys than it is for the good guys.

Kernel security, year to date

Posted Sep 11, 2008 9:37 UTC (Thu) by PaXTeam (subscriber, #24616) [Link]

> and I am saying that trying to do this means that you will miss security
> fixes becouse at the time they were created nobody realized that they
> fixed security issues.

why does that imply that *known* security fixes shouldn't be marked as such? you're thinking in black&white terms again and again, and can't imagine that kernel security isn't one such thing.

> because of this good guys need to apply all the patches, not try to
> cherry-pick the ones that they think are 'important'.

no, they do not. in fact, judging by the amount of reverted commits, they're a lot better off by being careful. on the other hand i don't recall a security fix being reverted so their quality is a lot better than non-security fixes.

> If they want to do so (distros for example), they need to investigate
> _every_ patch to see if it has security implications or not.

they have to do it regardless for non-security implications as well.

> tagging some of them as having security implications strongly implies
> that ones that are not tagged do not have security implications, and
> that is incorrect.

why does it imply that? because you say so? ;) in the real world out there, people maintaining kernels for their users are not dumb and they know that any security labeling can at most be best effort, not a guarantee.

> even if all the commit says is 'this is important for security' the fact
> that the details of the fix are directly attached to the comment makes
> it pretty easy for the bad guy to focus their exploit effort.

why do you think they *need* such explicit marks to begin investigating a commit? think about it: a known security fix is very likely to get applied/backported by the good guys (i.e., its value is reduced for the attacker), whereas an unknown one (meaning, not known as a security fix at commit time) has a much better chance of remaining unfixed elsewhere, therefore the bad guys already have to manually scan every commit *anyway*. not to mention the fact that they scan commits and find exploitable bugs in them when the bugs are introduced (think vmsplice or suid coredump), not when the security bugs are fixed. therefore it's really irrelevant for the bad guys whether the security fixes are marked or not. in fact, if there's any impact for them, they're probably very grateful to the kernel devs for not marking security fixes, because that increases their chances of exploiting a bug that is more likely to remain unfixed on their targets' systems.

> I think that the reduction in effort is greater for the bad guys than it
> is for the good guys.

there's *zero* reduction in effort for the bad guys. if you have evidence otherwise, i'm all ears.

Kernel security, year to date

Posted Sep 11, 2008 20:52 UTC (Thu) by dlang (✭ supporter ✭, #313) [Link]

I am not saying that it's inherently evil to label a security fix as such, but I am saying that it's also not inherently evil to not label a security fix.

there are very few reverted commits after a release (excluding -rc releases). If you are running a pre-release kernel then you have bigger problems, but if you stick with the released kernels (especially the -stable releases) reverts are rare, so I don't see how that is relevant.

>> tagging some of them as having security implications strongly implies
>> that ones that are not tagged do not have security implications, and
>> that is incorrect.

> why does it imply that? because you say so? ;) in the real world out
> there, people maintaining kernels for their users are not dumb and they
> know that any security labeling can at most be best effort, not a
> guarantee.

you can't have it both ways. either the tagging of patches as security issues is accurate (so that you can have a secure system by only applying those patches and not applying the other fixes), or it's not (in which case you need to apply all patches to get all the possible security fixes).

> there's *zero* reduction in effort for the bad guys. if you have evidence
> otherwise, i'm all ears.

bad guys don't need a security mark to investigate a commit, but if things have a security mark they know that they will find vulnerabilities in those commits; there is no chance that they will investigate the implications and then discover that they wasted their time.

it's like discovering that a high-security building has an unlocked door where your only communication with the building managers is public with an unknown amount of time before the owners see the message.

you can say 'door X is vulnerable', in which case the bad guys can go directly to that door and get in.

or you can say 'I found an unlocked door', in which case the building owner needs to check every door (potentially finding other problems), but the bad guy would probably have to try several doors before finding the open one, and in a high-security building there would be cameras on the doors, giving the building owner some chance of spotting the bad guy before they get in, even without seeing the message.

I know this isn't a perfect analogy, but it does illustrate why full public exposure of all the details of every security problem is not necessarily a good thing. with commits, the details are by definition available (the code changes are visible to everyone), so it can be a reasonable approach to not then hold up a sign that says 'target this'.

Kernel security, year to date

Posted Sep 12, 2008 2:28 UTC (Fri) by PaXTeam (subscriber, #24616) [Link]

> so I don't see how that is relevant.

that'd be because you carefully cut the part i quoted from you and responded to. let's see it again:

> because of this good guys need to apply all the patches, not try to
> cherry-pick the ones that they think are 'important'.

you're saying that all patches come in equal quality. if that were the case, reverts wouldn't need to occur, yet they do; that was my point. then there are the regressions that may or may not result in a revert, but in any case they show that one cannot blindly backport all patches.

> you can't have it both ways. either the tagging of patches as security
> issues is accurate

what does accurate mean here? that only security issues are marked as such, and all of them at that? who has that kind of expectation? except citizens of your (non-existent) black&white world, i mean.

beyond that, why do developers mark patches that can result in, say, file system corruption? is that accurate in the sense you meant it above? hint: it's *not*: any kernel heap overflow or similar memory corruption (not to mention several other classes of security bugs) can very well result in such file system (meta)data corruption, but that fact is pretty much never mentioned in a commit. yet developers don't stop, let alone argue about, marking file system corruption fixes as such. how do you explain that discrepancy?

> (so that you can have a secure system by only applying those patches and
> not applying the other fixes),

again: there's no such thing as a secure system. if you believe you have one, give remote shell access to the internet and see how long it will survive, despite having all your precious patches applied. so coming back to reality, people strive for making a balance (between having an acceptable risk and having to put effort into backporting/applying patches), and marking security fixes helps that process.

> or it's not (in which case you need to apply all patches to get all the
> possible security fixes)

are you saying that applying all patches (whatever that means by the way) will result in a secure system? seriously? why don't you give remote shell access to your most important personal box(es) to everyone? could it be because, horribile dictu, you don't actually consider your system (with all those patches applied) secure enough to withstand attacks?

> bad guys don't need a security park to investigate a commit, but if
> things have a security mark they know that they will find
> vulnerabilities in those commits, there is no chance that they will
> investigate the implications and then discover that they wasted their
> time.

did you even read what you responded to? bad guys do *not* care one whit about explicitly marked security fixes because their value is quickly diminishing (as people apply them). the real value is in bugs that either aren't fixed at all or whose fixes aren't widely known and hence not propagating as well as they could.

last but not least, since security fixes aren't marked as such because of thorough and careful examination of impact, there's a very good chance that a given bug cannot be exploited beyond DoS. that has about no value as compared to full privilege escalation bugs. so no, marking a commit as security related doesn't automatically hand anyone a full blown weaponized exploit, it takes a lot of time to reach that stage (or determine why it's not possible) and you save no time for the bad guys by not explicitly calling out attention to security fixes.

> I know this isn't a perfect analogy, but it does illustrate why full
> public exposure of all the details of every security problem is not
> necessarily a good thing. with commits, the details are by definition
> available (the code changes are visible to everyone), so it can be a
> reasonable approach to not then hold up a sign that says 'target this'

real life analogies *never* hold up in the digital world, yours is bleeding from so many wounds that it's not even funny.

1. a high-security building has been built as such, nothing remotely similar can be said of linux. save for L4 derivatives and coyotos, i can't really think of any contemporary public effort in fact.

2. an unlocked door is trivial to (ab)use, nothing remotely similar can be said of contemporary kernel bugs. case in point, the vmsplice exploit that abused not one but two bugs actually, and the kernel devs with all their knowledge of the kernel managed to miss the second (and more important) one initially.

3. building managers' role is nothing remotely similar to that of kernel developers. builders would be a closer match, except a building will eventually be finished (because it's built to a plan) whereas the kernel never will.

that's a couple of fundamental flaws and i haven't even finished parsing your first sentence. as for what i quoted above, you have yet to explain why bad guys would target explicitly marked security fixes in the first place (i told you why they wouldn't) and why they'd be helped by those marks (i told you why they wouldn't).

Kernel security, year to date

Posted Sep 12, 2008 9:23 UTC (Fri) by nix (subscriber, #2304) [Link]

Security fixes *do* introduce bugs, which do have to be fixed later: see,
e.g. ff9bc512f198eb47204f55b24c6fe3d36ed89592. But obviously this is done
by fixing the bug, not by reverting the security fix!

Kernel security, year to date

Posted Sep 12, 2008 9:30 UTC (Fri) by eteo (guest, #36711) [Link]

> Security fixes *do* introduce bugs, which do have to be fixed later: see,
> e.g. ff9bc512f198eb47204f55b24c6fe3d36ed89592. But obviously this is done
> by fixing the bug, not by reverting the security fix!

That's for sure, and I can give you many more examples too. But the good thing about such a regression fix is that it usually mentions the commit hash that introduced the problem. This is quite different from bugs that are security-relevant, and yet have nothing related mentioned in the changelogs.

Kernel security, year to date

Posted Sep 12, 2008 9:52 UTC (Fri) by nix (subscriber, #2304) [Link]

Well, there's 91b80969ba466ba4b915a4a1d03add8c297add3f and
27df6f25ff218072e0e879a96beeb398a79cdbc8 from the current stable tree. Now
neither actually say the Magic Word 'security', but anyone who's using an
upstream kernel who doesn't recognise that a buffer overrun is a security
concern *deserves* to be broken into for utter stupidity, IMNSHO.

They don't have CVE numbers and perhaps the authors didn't even bother to
isolate the commit that introduced the problem. How terrifying, I'm sure
the fix is much worse as a consequence.

Naturally some bugs have nothing mentioned in the changelogs: not everyone
cares to mention them, not everyone who fixes such a bug knows it is a
security fix at the time it's fixed, and so on.

Haven't we done this whole tiresome argument before? :/

Kernel security, year to date

Posted Sep 12, 2008 10:03 UTC (Fri) by eteo (guest, #36711) [Link]

> They don't have CVE numbers and perhaps the authors didn't even bother to

They have CVE names now. CVE-2008-3915 for commit 91b80969, and CVE-2008-3911 for commit 27df6f25.

> isolate the commit that introduced the problem. How terrifying, I'm sure
> the fix is much worse as a consequence.

I don't really understand what you are trying to say.

Kernel security, year to date

Posted Sep 12, 2008 10:10 UTC (Fri) by nix (subscriber, #2304) [Link]

The drumbeat here has been that security problems which aren't a)
identified as such with the magic word 'security' and b) given CVE
numbers shouldn't even have their fixes committed, in case the bad guys
spot the fix (as far as I can tell). I'm trying to point out that even
when they're not identified as such, it's often quite easy to identify
them.

Kernel security, year to date

Posted Sep 12, 2008 23:34 UTC (Fri) by bfields (subscriber, #19510) [Link]

They don't have CVE numbers and perhaps the authors didn't even bother to isolate the commit that introduced the problem.
09229edb68a3961db54174a2725055bd1589b4b8 and dc9a16e49dbba3dd042e6aec5d9a7929e099a89b.
How terrifying, I'm sure the fix is much worse as a consequence.

I don't think knowing the original commits would help much with the fixes in this particular case, but if you see any problems, speak up. I agree that including the commit id's of the original commits would have been a good idea, and I'll try to do that in the future.

And if I could make a request for next time: could you please (please!) respond by email instead of lwn comments? Preferably cc'd to the relevant public lists, but if for some reason you just can't stand the idea of sending email to vger lists, then private mail will work too.

Kernel security, year to date

Posted Sep 12, 2008 23:46 UTC (Fri) by nix (subscriber, #2304) [Link]

I didn't email you about this because I didn't think you'd done anything
which needed to change: you fixed a bug, and that's great. Obviously you
knew these fixes had security implications because you said so, and, to
me, that's enough.

(I *was* being somewhat sarcastic. Of course the fix isn't worse because
of the wording of the log message! :) )

Kernel security, year to date

Posted Sep 13, 2008 2:51 UTC (Sat) by bfields (subscriber, #19510) [Link]

> you fixed a bug, and that's great.

Yeah, well, but I'm also the one that introduced the more serious of those two bugs (and failed to catch the other in review). Urgh.

> I *was* being somewhat sarcastic.

OK! I think it's a reasonable request to include the ids of the commits that introduced the bugs, though.

Kernel security, year to date

Posted Sep 13, 2008 3:00 UTC (Sat) by bfields (subscriber, #19510) [Link]

(And, right, sorry, I see the sarcasm now. I got a little lost in the conversation there. More sleep needed!)

Kernel security, year to date

Posted Sep 12, 2008 3:22 UTC (Fri) by eteo (guest, #36711) [Link]

> because of this good guys need to apply all the patches, not try to
> cherry-pick the ones that they think are 'important'. If they want to do
> so (distros for example), they need to investigate _every_ patch to see
> if it has security implications or not. tagging some of them as having
> security implications strongly implies that ones that are not tagged do
> not have security implications, and that is incorrect.

Realistically, most good guys don't do that. As I mentioned in my previous reply, applying all patches may introduce instability and/or additional security bugs to the system.

> even if all the commit says is 'this is important for security' the fact
> that the details of the fix are directly attached to the comment makes
> it pretty easy for the bad guy to focus their exploit effort.

Obscurity does not prevent the bad guys from focusing their exploit effort; it only slows them down a little. Making the commits that fix security-relevant bugs a little more obvious may actually reduce the value of these vulnerabilities.

Kernel security, year to date

Posted Sep 10, 2008 22:26 UTC (Wed) by bojan (subscriber, #14302) [Link]

> as long as your mechanic fixes both, why would you care that they were both just single line-items in your bill?

You missed the key words: "assign the same importance".

I (and most other people) care more that the brakes work, because they are more important. And I also care more that security related bugs get fixed, because, just like with fixing brakes, failure to do so has a potential to cause more damage.

Of course, Linus and people supporting his view (that contradicts common sense) will come up with complicated philosophical reasons as to why they are right. All of that cannot change the above simple fact.

Kernel security, year to date

Posted Sep 10, 2008 9:10 UTC (Wed) by nix (subscriber, #2304) [Link]

Actually, some systems (even Linux systems) have in the past used the random number sources on sound cards as a source of randomness (not terribly good sources, there's all sorts of rhythmic electrical noise in there, but still they're sources). If that's a system's only source of entropy, and an attacker makes the sound card stop working, you've now got an entropyless system. A good few things will stall forever in such circumstances -> DoS.

(Sure, it's contrived: perhaps the only thing that saves us is that the contrived part is the original setup of the system, which isn't something an attacker can easily arrange.)

Kernel security, year to date

Posted Sep 10, 2008 15:25 UTC (Wed) by willy (subscriber, #9762) [Link]

Failing to kfree() memory that was previously allocated with kmalloc() isn't a security hole, just a memory leak that will eventually force you to reboot.

Kernel security, year to date

Posted Sep 10, 2008 23:49 UTC (Wed) by nix (subscriber, #2304) [Link]

If an external attacker can force you to do that it's a DoS. (But you knew
that.)

Economic view

Posted Sep 10, 2008 5:04 UTC (Wed) by jamesmrh2 (guest, #31680) [Link]

I think the underlying problem is the lack of incentives for people to perform code review.

It's very time-consuming and difficult to do properly; displaces time and energy that people want to use to create new code (because there are incentives for that); and there's not much in the way of recognition.

There must be some way to recognize the efforts of reviewers. Perhaps we need a "Bug-found-by:" tag, and for stats on that to be used as part of the various reports on who is contributing to the kernel. It might also be worthwhile to have some kind of live scoreboard at kernel.org or lwn.net.

Economic view

Posted Sep 10, 2008 5:26 UTC (Wed) by eteo (guest, #36711) [Link]

There's a "Reported-by:" tag that one can use.

Economic view

Posted Sep 11, 2008 7:54 UTC (Thu) by Cato (subscriber, #7643) [Link]

I agree about providing more incentives - reviewing needs to be seen as being as valuable as writing the code itself, or nearly so. Some cultural change required perhaps...

Kernel security, year to date

Posted Sep 13, 2008 13:43 UTC (Sat) by cde (guest, #46554) [Link]

I've said it before, but if Linux had moved to a 4G/4G + micro-kernel architecture, most of these vulnerabilities would not have been a big deal. Alas, I don't see the kernel moving to a more secure architecture any time soon.

Kernel security, year to date

Posted Sep 14, 2008 10:36 UTC (Sun) by PaXTeam (subscriber, #24616) [Link]

UDEREF in PaX is a much better alternative.

Copyright © 2008, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds