
Open-source team fights buffer overflows (ZDNet)

ZDNet looks at OpenBSD, as project leader Theo de Raadt works to eliminate buffer overflows. "The OpenBSD project hopes new changes to its latest release will eliminate "buffer overflows," a software issue that has been plaguing security experts for more than three decades."


How depressing is that?

Posted Apr 15, 2003 0:55 UTC (Tue) by metacircles (guest, #8895) [Link] (5 responses)

"After thirty years we've still not managed to persuade people to use a language implementation with array bounds checking, so we're going to cripple the operating system to compensate".

"Protected memory areas", if ZDNet is accurate here, are just going to add another step for any system that does compilation at runtime (e.g. dynamic languages, JIT compilers). Great, let's further penalize the languages which do perform bounds checking, because programmers insist on using C in situations for which it (or they) are not suitable.

How depressing is that?

Posted Apr 15, 2003 7:35 UTC (Tue) by Wol (subscriber, #4433) [Link] (2 responses)

I think one of the things we REALLY need is a bounds-limited C library! Some of the routines take a pointer, then just ASSUME that the caller has allocated sufficient space :-( !!!

In my project, I was going to go through my source and ban any such routines (except I now discover I need one - not too bad since I KNOW the data it is being fed is valid). But the basic trouble is that C (through its libraries) actually ENCOURAGES stupid programming :-(

Cheers,
Wol

How depressing is that?

Posted Apr 15, 2003 13:32 UTC (Tue) by raytd (guest, #4823) [Link] (1 responses)

No. C does not *encourage* stupid programming. It is only a tool, and a very good one. Please don't blame the tool for mistakes made by programmers. I myself have written routines where *I knew* the input would be valid. The truth is, you can't possibly know how a routine will be invoked. What happens when someone else comes along at a later date and expects that you coded the proper safeguards? Nothing but Good Things, I'm sure.

I will agree that if it makes sense to use another language then by all means use it. However, no higher level language will ever be a substitute for discipline and intelligence.

Cheers,
Travis

How depressing is that?

Posted Apr 15, 2003 16:10 UTC (Tue) by piman (guest, #8957) [Link]

I agree that a language can be, at most, as "intelligent" as the person using it, and if you're a bad programmer, you're going to write bad code in any language. However, no programmer is perfect, and language choice can greatly aid in avoiding problems.

Cyclone is a dialect of C that introduces new kinds of pointers (with various safety capabilities), a safer type system, a safer form of memory management (that is still low-level and manually allocated), and automatically inserted safety checks (for example, NULL pointer checks). Much of the C standard library has also been reimplemented in Cyclone. As an example of how much unsafe code is out there: in the process of benchmarking it, the porting team discovered several bugs in the benchmark suite they were using, even though most of those programs were incredibly well-tested and reviewed.

C isn't a "bad" language by any means, but it doesn't make writing high-quality, safe code easy, even if you are a good programmer.

How depressing is that?

Posted Apr 15, 2003 13:38 UTC (Tue) by doodaddy (guest, #10649) [Link]

Interesting point. You make me think about all the things the OS supports that could technically be done by a language -- process boundaries, file pointer clean-up on process death, etc. There are many things that a language could handle that the OS "double-checks." This is probably a good thing.

I agree that C buffer overflows are an embarrassment, but I think the OS could support such checking efficiently.

How depressing is that?

Posted Apr 15, 2003 21:49 UTC (Tue) by giraffedata (guest, #1954) [Link]

It's even more depressing that another class of buffer overruns could be eliminated by a simple enhancement of the CPU, eliminating the need for costly workarounds or runtime bounds checking. But no one seems to have moved on this either.

One source of buffer overruns is where the index into the array is fine, but there wasn't enough memory allocated for the array because the computation (something like "size of element" x "number of elements") overflowed the CPU's word.

On every popular CPU I know, an integer calculation that should yield a really big number silently produces a really little one instead. If it would generate a trap, like divide by zero, the program could properly crash at memory allocation time instead of proceeding on to the buffer overrun.

Crispin Cowan and Solar Designer

Posted Apr 15, 2003 3:54 UTC (Tue) by AnswerGuy (guest, #1256) [Link]

Crispin Cowan, Solar Designer and others should be credited in that article. Two of the techniques it describes were developed for Linux by these programmers.

Crispin Cowan described the use of "canaries" on the stack (in userspace, with a patch to the compiler, if I recall correctly) in a USENIX paper a few years ago (1999 or so?) on StackGuard. He continued to pioneer similar work in Immunix and through his company WireX.

Solar Designer has made Linux kernel patches to implement non-executable stack segments available for years now. These are part of his OWL (OpenWall Linux) project and have been incorporated (at least in part) to LIDS and to the ironically named grsecurity (Get Rewted) patches.

Not that I think this OpenBSD announcement isn't newsworthy. It's good that more work is being done to mitigate some of the risk posed by these pernicious and chronic design flaws. However, I believe that credit should be given (and these are the tips of many icebergs of effort over a very long time) and links should be provided to related work. In particular, those links may enlighten some readers to the counterpoints --- the downsides of these approaches --- and help explain why more hasn't been done sooner.

Ultimately we may see a time when something like EROS (a true capabilities based system) is taken more seriously and perhaps even generally available as an alternative to UNIX-like (identity based) and ACL (access control list) based systems.

Kernel Trap covered this a while ago

Posted Apr 15, 2003 14:21 UTC (Tue) by pflugstad (subscriber, #224) [Link]

See:

http://kerneltrap.com/node.php?id=573

Much more detail on what specifically they are doing.

Pete

Not eliminating overflows

Posted Apr 15, 2003 17:23 UTC (Tue) by ncm (guest, #165) [Link] (1 responses)

They aren't eliminating buffer overflows at all.

Instead, they're making it harder for crackers to exploit those buffer overflows. Arguably that's what matters most, but it's not what the article says. The danger is that it will encourage people to tolerate sloppy programming (which remains as popular as ever) because it's less obviously a source of vulnerabilities.

Not eliminating overflows

Posted Apr 15, 2003 21:31 UTC (Tue) by JoeBuck (subscriber, #2330) [Link]

It would appear that, if these techniques are used to protect a daemon that is listening on an unprotected IP port, they will change an exploitable buffer overflow into a crash. A denial of service attack is less severe than a remote root exploit, but it can still be a problem.


Copyright © 2003, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds