
Critical Vulnerabilities in Samba

Critical Vulnerabilities in Samba

Posted May 17, 2007 4:39 UTC (Thu) by jwb (guest, #15467)
Parent article: Critical Vulnerabilities in Samba

Does the NULL checking bug affect Samba on Linux? I've always heard that memory allocations never fail on Linux, but that they cause segmentation errors when the failed allocation is read or written. That, frankly, seems reasonable, because there is no practical way to handle a failed memory allocation.
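
A minimal sketch of the behaviour in question (results depend on your RAM, swap, and vm.overcommit_memory setting):

    /* overcommit sketch: malloc() succeeds, but actually touching the
       pages is what may summon the OOM killer. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t sz = (size_t)1024 * 1024 * 1024;  /* 1 GiB; raise it past
                                                    your RAM + swap */
        char *p = malloc(sz);
        if (p == NULL) {            /* likely with overcommit disabled */
            perror("malloc");
            return 1;
        }
        printf("malloc succeeded\n");
        memset(p, 0, sz);           /* faulting the pages in is what can
                                       kill the process */
        printf("touched it all\n");
        free(p);
        return 0;
    }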



Critical Vulnerabilities in Samba

Posted May 17, 2007 5:09 UTC (Thu) by thedevil (guest, #32913) [Link]

>>I've always heard that memory allocations never fail on Linux<<

They can if you have this in /etc/sysctl.conf:
vm.overcommit_memory=2
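
The same setting can also be applied at runtime with
sysctl -w vm.overcommit_memory=2
(in that mode, how much can be committed is governed by vm.overcommit_ratio).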

Critical Vulnerabilities in Samba

Posted May 17, 2007 5:43 UTC (Thu) by ncm (subscriber, #165) [Link]

Very large allocation requests (e.g. bigger than RAM + swap) can fail immediately.
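
That one is easy to demonstrate; a request that no combination of RAM and
swap could cover is rejected up front:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Half the address space: glibc's malloc() gives up on this
           immediately, no overcommit tuning required. */
        void *p = malloc((size_t)-1 / 2);
        printf("%s\n", p ? "succeeded?!" : "failed immediately");
        free(p);
        return 0;
    }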

Critical Vulnerabilities in Samba

Posted May 17, 2007 10:00 UTC (Thu) by tialaramex (subscriber, #21167) [Link]

Some very conservatively written pieces of software do handle failed memory allocation. It's easier (indeed, only possible) to do this if, for your software, 'nothing happens' is considered an acceptable consequence of such a dire problem. For example, as far as I know, the 'init' process will simply fail to launch the new process, wait a while, and try again later. Several other daemons have been written so that their behaviour degrades gracefully if allocations start failing after they reach their idle state.
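
A sketch of that wait-and-retry pattern (hypothetical code, not init's actual source):

    #include <errno.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Keep retrying fork() while the kernel is short of resources;
       'nothing happens for a while' is an acceptable outcome here. */
    static pid_t spawn_with_retry(void)
    {
        for (;;) {
            pid_t pid = fork();
            if (pid >= 0)
                return pid;    /* parent gets the child pid, child gets 0 */
            if (errno != EAGAIN && errno != ENOMEM)
                return -1;     /* a real error, not resource pressure */
            sleep(5);          /* back off and try again later */
        }
    }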

For user-interactive application software you're right that it's normally doom: if you can't get memory to draw a picture, you may not be able to get enough memory to pop up a dialog that says "Out of memory" either.

In any case it's not acceptable for a serious security problem to occur as a result of lack of available memory. At worst this should cause a temporary denial of service.

Memory shortage

Posted May 17, 2007 14:18 UTC (Thu) by dark (guest, #8483) [Link]

One technique for dealing with that is to allocate an emergency reserve
of memory when the program starts up. Then, if you run out of memory,
start a graceful shutdown process while allocating from that emergency
reserve.

The main difficulty is that library functions won't know about your
reserve. If you're on intimate terms with your malloc implementation, you
can get around that by actually freeing the emergency reserve, and relying
on malloc to keep the memory around and allocate from that space.
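
Something like this (a sketch; RESERVE_SIZE is a number I made up, and
the reuse-after-free trick depends on your malloc, as noted):

    #include <stdlib.h>

    #define RESERVE_SIZE (256 * 1024)   /* tune per program */

    static void *emergency_reserve;

    void reserve_init(void)
    {
        emergency_reserve = malloc(RESERVE_SIZE);
    }

    /* Call on the first failed allocation: release the reserve and
       begin a graceful shutdown, hoping malloc hands the freed block
       back to us piecemeal. */
    void out_of_memory(void)
    {
        free(emergency_reserve);
        emergency_reserve = NULL;
        /* ... flush state, close connections, exit cleanly ... */
    }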

Of course, it wouldn't be a good idea for every program to do this. Then
you'd run out of memory. ;)

Critical Vulnerabilities in Samba

Posted May 18, 2007 2:51 UTC (Fri) by walters (subscriber, #7396) [Link]

And in any case, one of the least useful things to do is pop up a dialog that says "Out of Memory [Ok]", even if you could.

Critical Vulnerabilities in Samba

Posted May 18, 2007 13:12 UTC (Fri) by cortana (subscriber, #24596) [Link]

To be fair, that is preferable to what many Linux applications do: exit immediately and silently, with no explanation.

Critical Vulnerabilities in Samba

Posted May 21, 2007 10:29 UTC (Mon) by dion (guest, #2764) [Link]

Nope.

There are few things worse than applications that hang instead of failing.

I'd much rather have the application crash so it can be restarted than have it wait for an operator to log in and manually shut it down.

Anything that could ever be automated must be able to crash rather than hang, so the "Application has run out of memory" dialog must be an external post-crash handler that can be turned off if not needed.

Granted, GUI applications that are 100% unscriptable might get away with assuming that there is a warm body in the chair in front of them, but I like to think those are few and far between.

Critical Vulnerabilities in Samba

Posted May 18, 2007 8:15 UTC (Fri) by xoddam (subscriber, #2322) [Link]

> I've always heard that memory allocations never fail on Linux

Depends entirely on how you allocate the memory! The standard malloc()
in glibc will rarely fail: large allocations result in new anonymous,
not-yet-backed pages supplied by mmap(). But it *is* possible to run out
of virtual address space for new allocations in a long-running daemon,
especially if it has many threads (whose stacks fragment the VM space).

On the other hand, many applications use alternative allocators that are
more likely to return NULL. It is possible to refuse to overcommit
memory in the kernel, and to try to impose a realistic maximum heap size
on glibc.
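
A process can even impose such a limit on itself; a sketch (note that
RLIMIT_AS caps the whole address space, not just the heap):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Cap our address space at 64 MiB; past that, malloc()
           returns NULL instead of overcommitting. */
        struct rlimit rl = { 64 * 1024 * 1024, 64 * 1024 * 1024 };
        if (setrlimit(RLIMIT_AS, &rl) != 0)
            perror("setrlimit");

        void *p = malloc(128 * 1024 * 1024);  /* bigger than the cap */
        printf("big malloc %s\n", p ? "succeeded" : "returned NULL");
        free(p);
        return 0;
    }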

> there are no practical ways to handle failed memory allocation.

A good system is designed robustly enough to cope with failure. In a
network daemon, dropping packets or closing connections once in a while
is just fine; performance will suffer, but nothing ought to break.
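
In code that policy is simple enough; a hypothetical fragment of such a
daemon's accept path:

    #include <stdlib.h>
    #include <unistd.h>

    struct connection {
        int fd;
        char *buf;
        /* ... per-connection state ... */
    };

    /* If the per-connection state can't be allocated, shed this one
       client and keep serving the rest. */
    struct connection *conn_new(int fd)
    {
        struct connection *c = malloc(sizeof *c);
        if (c == NULL) {
            close(fd);   /* degrade gracefully: drop one connection */
            return NULL;
        }
        c->fd = fd;
        c->buf = NULL;
        return c;
    }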

