See also: last week's Letters page.
Letters to the editor should be sent to email@example.com. Preference will be given to letters which are short, to the point, and well written. If you want your email address "anti-spammed" in some way please be sure to let us know. We do not have a policy against anonymous letters, but we will be reluctant to include them.
January 17, 2002
From: Mark J Cox <firstname.lastname@example.org>
To: email@example.com
Subject: mutt example in LWN
Date: Thu, 10 Jan 2002 11:07:33 +0000 (GMT)

"before sneering too hard. Linux distributors have done a good job at rushing out fixes for the remotely exploitable vulnerability in the widely-used mutt mailer. That vulnerability is, of course, a buffer overflow problem."

Hiya; although there are other examples of remotely exploitable vulnerabilities, the mutt vulnerability you cite is a bad example. In this case, according to the mailing lists, a remote attacker can cause a NULL to be written to an arbitrary location in memory. I think it's unlikely that this could be crafted to give remote access to a machine. Also, unlike the Windows overflow, for the mutt vulnerability to write this NULL to arbitrary memory it requires an attacker to send a crafted mail message that is read by the root user running a vulnerable version of mutt. Given all this, it's not a particularly serious vulnerability.

Good software design can stop buffer overflows altogether. Apache was designed to have a resilient pool-based memory management system, and in the history of Apache 1.3 there have been no vulnerabilities due to buffer overflows or that are particularly serious. See http://www.apacheweek.com/features/security-13

Cheers, Mark
--
Mark Cox / Red Hat Europe / OpenSSL / Apache Software Foundation
firstname.lastname@example.org //// T: +44 798 061 3110 //// F: +44 845 333 9533
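The pool-based approach Mark describes can be sketched in miniature. This is not Apache's actual API (its pools live in apr/alloc.c and are more elaborate); the `pool_*` names here are invented for illustration. The point is that allocations are bounds-checked against the pool, exhaustion is reported rather than overflowed, and everything is freed at once:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* A tiny fixed-size arena: allocations are checked against the pool's
 * capacity, and the whole pool is released in one pool_destroy() call. */
typedef struct {
    char  *base;
    size_t size;
    size_t used;
} pool_t;

pool_t *pool_create(size_t size) {
    pool_t *p = malloc(sizeof *p);
    if (!p) return NULL;
    p->base = malloc(size);
    if (!p->base) { free(p); return NULL; }
    p->size = size;
    p->used = 0;
    return p;
}

/* Returns NULL instead of writing past the pool when it is exhausted. */
void *pool_alloc(pool_t *p, size_t n) {
    if (n > p->size - p->used) return NULL;
    void *mem = p->base + p->used;
    p->used += n;
    return mem;
}

/* Copy a string into the pool; the destination is sized to fit exactly. */
char *pool_strdup(pool_t *p, const char *s) {
    size_t n = strlen(s) + 1;
    char *d = pool_alloc(p, n);
    if (d) memcpy(d, s, n);
    return d;
}

void pool_destroy(pool_t *p) { free(p->base); free(p); }
```

Because every string copy goes through pool_alloc(), a request that would run off the end of the buffer fails cleanly instead of corrupting memory.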
From: Zooko <email@example.com>
To: firstname.lastname@example.org
Subject: automatically prevent buffer overflows without giving up C/C++
Date: Thu, 10 Jan 2002 06:30:08 -0800

Folks:

I'm surprised you didn't mention libsafe: http://www.research.avayalabs.com/project/libsafe/

I haven't used it yet, but apparently it can be applied at program-load time to *object* code, without needing access to the source code, and it prevents all buffer overflow attacks. Why isn't this standard equipment on every Linux distribution? Possibly because it is new and people don't know about it yet. Possibly because it imposes some tiny performance penalty.

Regards, Zooko
--- zooko.com Security and Distributed Systems Engineering ---
From: Sid Boyce <email@example.com>
To: firstname.lastname@example.org
Subject: RE: It is time to be done with buffer overflows
Date: Thu, 10 Jan 2002 15:33:43 +0000

I have been using "libsafe" (supplied by Lucent Technologies) since version 1; version 2 offers protection not only against buffer overflows, but also against format string attacks. I don't know how effective libsafe is; there was a dismissive/hostile response to it from SuSE, along the lines that it did not offer comprehensive protection.

In my own experience, I had one application I compiled here that just did not run; on examination of /var/log/warn, I discovered the problem was a buffer overflow. I emailed the author, and it was fixed in a day by the issue of an updated source release. Then there was IBM's JDK-1.3, which similarly failed, and I went back to using Blackdown's Java.

I wonder if the reluctance to deploy libsafe comes down to this: it is easier to live with a problem the customer hasn't seen, even one that could be disastrous, than to deal with an "XYZ doesn't work here" call from many of your customers.

Regards
--
Sid Boyce ... hamradio G3VBV ... Cessna/Warrior Pilot
Linux only shop
From: "John D. Hardin" <email@example.com>
To: firstname.lastname@example.org
Subject: Re: 1/10/02 Front
Date: Wed, 9 Jan 2002 21:44:55 -0800 (PST)

"It is time to be done with buffer overflows."

Surely you've heard of Immunix and the StackGuard compiler? While not a cure for buffer overflows, it makes their existence less of a critical problem during the time the code is undergoing security audit.
--
John Hardin KA7OHZ ICQ#15735746 http://www.impsec.org/~jhardin/
email@example.com pgpk -a firstname.lastname@example.org
 768: 0x41EA94F5 - A3 0C 5B C2 EF 0D 2C E5 E9 BF C8 33 A7 A9 CE 76
1024: 0xB8732E79 - 2D8C 34F4 6411 F507 136C AF76 D822 E6E6 B873 2E79
From: email@example.com (Bryan Henderson)
To: firstname.lastname@example.org
Subject: buffer overruns - helpful tool
Date: Thu, 10 Jan 2002 14:36:18 -0800

Your editorial last week talks about the annoyance of buffer overruns prevalent on Linux systems, and how the heavy use of C makes them common. Indeed, programming to avoid buffer overruns in C is monotonous, and I really don't blame anyone for ignoring that possibility in a large work of free software. Until it is practical to do all code in high level languages, though, I have a suggestion to avoid buffer overruns in C: asprintf().

asprintf() is a surprisingly little-used GNU C library routine. It's special to the GNU library, so you can use it only in Linux-only code. But if you can limit yourself to Linux, asprintf() makes C programming almost as easy as in a string language, and saves you from having to think about buffer overruns. asprintf() is just like sprintf(), except that it allocates the space for the result string. So your buffer is never too small. The only thing you have to do, reminding yourself that you'd still rather be using a high level language, is free the memory after you use the string.

(The next best thing, for code that must run without the benefit of the GNU C library, is the now prevalent feature of snprintf() where it tells you how much space your result _would have_ required when it doesn't fit in the space you provided. You can use that to do a separate malloc() and make your own asprintf().)

Also, make liberal use of macros like this:

    #define STRSCPY(A,B) \
        (strncpy((A), (B), sizeof(A)), *((A)+sizeof(A)-1) = '\0')

This makes it painless to copy a string from B to A without any possibility of overrunning your A array.
--
Bryan Henderson Phone 408-621-2000
San Jose, California
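Bryan's portable fallback can be sketched in a few lines. The function name concat() is invented for illustration; on glibc you could instead call asprintf(&s, "%s%s", a, b) directly. The trick is that snprintf(NULL, 0, ...) returns the number of characters the result would need, so the buffer can be sized exactly:

```c
#include <stdio.h>
#include <stdlib.h>

/* Portable asprintf() substitute for concatenation: snprintf() reports
 * how much space the result needs, so we malloc() exactly that and
 * format again. The caller must free() the returned string. */
char *concat(const char *a, const char *b) {
    int n = snprintf(NULL, 0, "%s%s", a, b);
    if (n < 0) return NULL;
    char *buf = malloc((size_t)n + 1);
    if (buf) snprintf(buf, (size_t)n + 1, "%s%s", a, b);
    return buf;
}
```

No matter how long the inputs are, the buffer is always big enough; the only failure mode left is malloc() returning NULL, which the caller can check.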
From: Andrzej Kukula <email@example.com>
To: firstname.lastname@example.org
Subject: Buffer overflows
Date: Fri, 11 Jan 2002 12:57:45 +0100

There is at least one good technique for writing code that doesn't contain any buffer overflows. You may see it in the qmail server and other programs written by Prof. Daniel Bernstein (http://cr.yp.to). It's based on a very simple yet powerful string library called "stralloc", and requires very high coding discipline. Let me remind you that since the first qmail release in 1996, no one has found any buffer overflow in it, despite the fact that there was a money prize (see http://cr.yp.to/qmail/guarantee.html). There's also a very secure DNS server from Prof. Bernstein, "tinydns", which is also based on this library.

The library has many advantages:
- strings are binary - this means that there may be \0s in the middle,
- string length is limited only by memory,
- the library is mature - it contains a complete, orthogonal set of functions for string manipulation,
- the library is portable across UN*X.

Download qmail and see examples of good engineering! I can hardly imagine programmers rewriting their apps to use "stralloc"; I just want to say that the stralloc library, together with other libraries from Prof. Bernstein, is a very good foundation for writing error-free programs.

Regards, Andrzej Kukula
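The core of the stralloc idea is a counted, growable string. What follows is a simplified sketch, not Bernstein's actual code; his real library uses names like stralloc_copys() and stralloc_cats(), and the sa_* names here are invented for illustration. Because the length is tracked explicitly and every append is checked against the allocation, overflow becomes impossible and embedded '\0' bytes are harmless:

```c
#include <stdlib.h>
#include <string.h>

/* Counted string: s points at len bytes of data in an allocation of a bytes. */
typedef struct { char *s; size_t len; size_t a; } sa_t;

/* Ensure capacity for at least n bytes; returns 0 on allocation failure.
 * The growth formula mimics stralloc's geometric over-allocation. */
static int sa_ready(sa_t *x, size_t n) {
    if (x->a >= n) return 1;
    size_t want = n + (n >> 3) + 30;
    char *p = realloc(x->s, want);
    if (!p) return 0;
    x->s = p;
    x->a = want;
    return 1;
}

/* Append n bytes; the buffer grows as needed, so it can never overflow. */
int sa_catb(sa_t *x, const char *buf, size_t n) {
    if (!sa_ready(x, x->len + n)) return 0;
    memcpy(x->s + x->len, buf, n);
    x->len += n;
    return 1;
}

int sa_cats(sa_t *x, const char *s) { return sa_catb(x, s, strlen(s)); }
```

The discipline Andrzej mentions consists of never touching x->s beyond x->len directly and always checking the return values, since the only remaining failure is memory exhaustion.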
From: Lars Wirzenius <email@example.com>
To: firstname.lastname@example.org
Subject: Buffer overflows in C
Date: 11 Jan 2002 17:33:41 +0200

You will probably get a pile of letters suggesting this, but just in case you don't:

Buffer overflows are, indeed, a common problem with C programs. It is just too darn easy to mismanage memory allocation when doing string processing in C. If switching to a more high-level language is not an option, one can still improve things while staying with C. The key is to avoid using raw C character arrays (whether allocated statically or dynamically) directly, and instead use an abstraction layer. A simple one is implemented in the Glib library; see http://developer.gnome.org/doc/API/glib/glib-strings.html for a description. Glib also includes some functions that help deal with normal C strings but hide many allocation details; see http://developer.gnome.org/doc/API/glib/glib-string-utility-functions.html. Using either of these should help reduce buffer overflows a bit.

I wrote a somewhat more ambitious abstraction for the Kannel project; see http://liw.iki.fi/liw/octstr.txt for one version of the interface. The trouble with this approach was that pretty much everything related to string processing had to be re-implemented, since none of the standard libraries would deal with my abstraction. (Actually, we gave up and implemented a way to access the raw C character array within the abstraction, to be able to use certain parts of the standard library.)

It is my opinion that even a limited and incomplete abstraction, such as any of the above, will help reduce buffer overflows tremendously. In fact, they even make programming easier and more fun, since you don't have to worry about minute details of memory allocation every time you process a string.

(Myself, I prefer to use a higher-level language when possible, but the huge number of tools that work with C, but not with, say, Python, does not always make this practical.)
--
"They that can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -- Benjamin Franklin, 1759
From: Miles Elam <withheld on request>
To: email@example.com
Subject: Modern C++ doesn't have the same problems
Date: Fri, 11 Jan 2002 15:07:08 -0800

While legacy C++ may have had as hard a time fighting the dreaded buffer overflow as the language upon which it was originally based, modern C++ implementations have done much to help the programmer avoid such oversights. Case in point: if you see the following in C++

    char *foo;
    foo = (char*)malloc(12);
    strcpy( foo, "Hello" );
    strcat( foo, " World" );
    int length = strlen( foo );
    free( foo );

or such relics from C as strcmp, realloc, qsort, etc., then you will eventually have problems. In modern, standard C++ you will more likely see the following:

    std::string foo;
    // foo.reserve( 12 );  // Optional if you want to avoid memory
                           // reallocation and keep up with the C version
    foo = "Hello";
    foo += " World";
    int length = foo.size();

And if you were to compile these, you would find little measurable difference (if any) in code size or speed. Go ahead! I dare you! And note that if the former were reading a user-generated string, a buffer overflow exploit is quite likely without extra runtime checks and overhead. The C++ version has no similar problem, and you don't have to explicitly bother with the heap, dynamic memory allocation, and the dreaded memory leak when deallocating memory. All of the speed and (almost) none of the headaches.

Standard C++ was ratified in 1998. Lumping C and C++ together is as outdated and wrongheaded as saying Linux has no support whatsoever for USB devices. After all, it was true in 1998, and there are still plenty of installations out there that still don't support USB.

I hate to be a language bigot, and I truly believe that C, Java, C++, Perl, Python, et al. have their own niches and their own sets of strengths and weaknesses. But the longer the belief that C++ is C with extra unnecessary complexity is allowed to stand, the longer die-hard C programmers who refuse to use something "slower" will avoid it and its protection against the buffer overflow attack. A good article on this topic is Bjarne Stroustrup's "Learning Standard C++ as a New Language" (http://www.research.att.com/~bs/new_learning.pdf).

As a counterargument against "just be more careful in C": being careful all of the time is not realistic. How many times have we accidentally dropped a plate while doing the dishes, or locked ourselves out of the car or house? People are fallible, and therefore so is the software created by people. C++ is a logical "other language" for people to move toward if they already know C. It follows a closer programming model than a language like Java, with its JVM and fundamentally different focus. Don't criticize C++ too unfairly.

And while I'm here, I'd like to mention that all copies of "Practical C++ Programming" published by O'Reilly should be used for kindling. It's about time they came out with a new edition. Too many people buy that book on the good name of the publisher, only to be forever turned off the language for the worst reasons.

- Miles Elam
From: "Dan Maas" <firstname.lastname@example.org>
To: <email@example.com>
Subject: Buffer overflows
Date: Sat, 12 Jan 2002 19:50:31 -0500

"...anybody contemplating a new development should think long and hard about using an implementation language that is inherently resistant to buffer overflows. Many such languages exist (consider Python, Perl, Ruby, Java, etc.)..."

One must keep in mind that while these languages are indeed resistant to buffer overflows, this very feature makes them vulnerable to memory-exhaustion denial-of-service attacks, since the language runtime presumably allocates additional memory when strings need to grow longer. E.g., a C programmer might write:

    char *a, *b, *c;
    sprintf(c, "%s%s", a, b); /* potential overflow! */

while a Python programmer might write:

    c = a + b # no chance of overflow, but allocation of space for c
              # could fail if a and b are large and memory is exhausted

In other words, no language runtime can automatically eliminate the class of bugs that results from trusting input too much. The programmer cannot avoid spending time and effort to ensure that the code handles malicious input gracefully (e.g. by using snprintf() in C, or wrapping the Python statement in a 'try' block to catch memory exceptions).

Regards, Dan
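Dan's snprintf() suggestion can be sketched concretely. The function name safe_concat() is invented for illustration; the key properties are standard: snprintf() never writes more than the given buffer size, and its return value is the length the full result would have had, so truncation can be detected and the oversized input rejected rather than trusted:

```c
#include <stdio.h>

/* Concatenate a and b into a fixed-size buffer. Returns 1 on success;
 * returns 0 (leaving a valid, truncated string in dst) if the result
 * would not fit, instead of writing past the end of dst. */
int safe_concat(char *dst, size_t dstsize, const char *a, const char *b) {
    int n = snprintf(dst, dstsize, "%s%s", a, b);
    return n >= 0 && (size_t)n < dstsize;
}
```

The caller still has to decide what to do when the function reports failure, which is exactly Dan's point: the language or library can contain the damage, but handling hostile input remains the programmer's job.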
From: Adam C Powell IV <firstname.lastname@example.org>
To: email@example.com
Subject: Buffer overflows and hardware/software diversity
Date: Tue, 15 Jan 2002 12:39:35 -0500

To the editor:

Thank you for your excellent editorial on buffer overflows (1/10/02 main page), in which you rightly decry the unfortunately common buffer overflow problems in both open and proprietary software. You offer as solutions thorough auditing of code and more widespread use of languages which do not suffer from this problem (though these alternative languages are only as secure as their implementations). I do not know the details, but from what I have heard, certain kernel modifications such as those found in NSA SELinux can offer additional protection.

I would like to offer one more solution which we in the Free Software community (and Linux in particular) are in a unique position to use: security by platform diversity. When a buffer overflow problem is reported, the first exploits are (almost) always written for i386 and compatibles. Those of us who run Linux on PowerPC, Alpha, Sparc, ARM and other platforms are thus inherently immune to takeover via those particular exploits. And though it is possible for an attacker to write other exploits for these alternative platforms, it is certainly not easy to do so. Debian in particular shines as a cross-platform distribution: potato was released for six platforms, and there are eleven platforms with at least 7000 packages in woody (Alpha, ARM, HPPA, IA-32, IA-64, M68k, MIPS, MIPSel, PowerPC, S/390, Sparc, with over 5000 packages for Hitachi SuperH in unstable).

Having watched the demise of the once-mighty but closed-source Amiga, having seen Apple declare obsolescence of generation after generation of old Mac hardware and Microsoft abandon platform upon platform for (planned) Windows NT support, and having heard Sun's recent announcement of the end of Solaris/x86, I can quite confidently state that nowhere in the proprietary world will there ever be anything close to the level of platform diversity that we have in our community. The classic cycle of "closed-source -> not maintained -> abandoned -> insecure -> dead" simply does not exist in our world: as long as there are user/developers on a given platform, it will survive and even thrive, with thousands of software upgrades and new releases every year.

There are of course limitations to security by hardware diversity. One is that running, say, wu-ftpd on ARM protects the machine from hostile takeover using a wu-ftpd i386 buffer overflow exploit, but does not protect it from a server crash or other DoS use of the exploit. Another is that while it may not be easy to translate an exploit to a different processor architecture, for a good assembly coder it's not *that* hard either, once the exploit is known. So this could in a way be considered a form of "security by obscurity" which buys hours' or days' worth of time (cf. your piece a few months ago on potential lightning worms which propagate across the entire 'net in 15 minutes) but does not *guarantee* protection.

Software diversity, on the other hand, does provide such a guarantee against these weaknesses. For example, whereas Microsoft ships just one (notoriously insecure) http server, Debian has *nine* in unstable, along with multiple ftpds and two sshds, and the default mail transport agent is *not* sendmail. In addition to Linux, Debian has the Hurd in an advanced state, and even experimental FreeBSD, Darwin/MacOSX, and (shudder) Win32 ports in the works for kernel diversity. Viewed in this light, the GNOME/KDE/GNUStep etc. diversity gives more strength to our community than just the competitive stimulus which they provide -- not to mention Netscape4/Mozilla/Konqueror/Galeon, KMail/Evolution/NSMail/Mutt/Balsa, etc. All of this diversity makes life very difficult for even a truly gifted cracker who wants to bring down the free software community, and makes highly improbable your prediction that the Linux community will suffer a catastrophic security problem in 2002 on the scale of those which afflicted Microsoft in 2001 (and 2000 and 1999 and...).

So diversity of hardware can offer protection from hostile takeover via buffer overflows, at least for a time. Software diversity does even better, by limiting the machines (or users within a machine) which can be compromised to those which run the vulnerable implementation of a given service. In this light, the monocultures of Microsoft and even Apple and Sun make those companies treacherously vulnerable to catastrophic consequences of buffer overflows, as we have seen. On a smaller scale, this calls into question RedHat's decision to no longer provide a "complete operating system" for Alpha, and Rebel.com's switch from ARM to i386-compatible Crusoe in the Netwinder firewall/server product line.

It is unfortunate that many of the architecture ports exist mainly to service legacy machines: with even Alpha and in some ways IA-32 scheduled for phaseout, only IA-64, PowerPC, Sparc, S/390 and perhaps ARM and SuperH remain under active development. Then again, nowhere outside the Free Software community is any software maker positioned to take advantage of even half of this wonderful plethora of hardware, and even legacy hardware platforms will remain quite capable of meeting security-sensitive server and router/firewall needs for a great many users for the indefinite future -- but only if they run Free Software!
--
-Adam P.
GPG fingerprint: D54D 1AEE B11C CE9B A02B C5DD 526F 01E8 564E E4B6 Welcome to the best software in the world today cafe! <http://lyre.mit.edu/%7Epowell/The_Best_Stuff_In_The_World_Today_Cafe.ogg>