
Fighting fork bombs

Posted Mar 31, 2011 14:21 UTC (Thu) by cesarb (subscriber, #6266)
Parent article: Fighting fork bombs

> Keeping the entire history of all processes created over the lifetime of a Linux system would be a costly endeavor. Clearly, there comes a point where history needs to be discarded.

I am failing to see why. You only need to keep the family tree of live processes (so branches containing only dead processes can be pruned). You do not need to keep all of the inner nodes either: if a dead inner node has a single dead child, you can collapse the two into one dead inner node (how many intermediate dead nodes there were does not matter, and even if it did, they could be replaced by a counter in the collapsed node). Unless I am visualizing it incorrectly, the worst case is then a binary tree with all the live nodes as leaves, so the tree has a bounded size (which is not that large).
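
As a minimal sketch of that pruning rule (my reading of it, as hypothetical user-space code rather than anything kernel-ready): remove dead leaves, and fold a dead inner node with a single dead child into that child, keeping a counter of how many ancestors were collapsed.

    #include <stdlib.h>

    /* Hypothetical ancestry-tree node, first-child / next-sibling layout. */
    struct pnode {
        int  live;              /* process still running?               */
        long collapsed;         /* dead ancestors folded into this node */
        int  nchildren;
        struct pnode *parent;
        struct pnode *child;    /* first child  */
        struct pnode *sibling;  /* next sibling */
    };

    /* Unlink n from its parent's child list. */
    static void detach(struct pnode *n)
    {
        struct pnode **pp = &n->parent->child;
        while (*pp != n)
            pp = &(*pp)->sibling;
        *pp = n->sibling;
        n->parent->nchildren--;
    }

    /* Call when a process exits: mark it dead, then walk upward, pruning
     * dead leaves and collapsing single-child chains of dead nodes. */
    static void process_exit(struct pnode *n)
    {
        n->live = 0;

        while (n && n->parent && !n->live) {
            struct pnode *parent = n->parent;

            if (n->nchildren == 0) {
                /* Dead leaf: this dead branch ends here, drop it. */
                detach(n);
                free(n);
            } else if (n->nchildren == 1 && !n->child->live) {
                /* Dead inner node over a single dead child: merge the
                 * two, remembering how many ancestors were folded in. */
                struct pnode *c = n->child;
                struct pnode **pp = &parent->child;

                c->collapsed += n->collapsed + 1;
                c->parent = parent;
                c->sibling = n->sibling;
                while (*pp != n)        /* replace n with c in the list */
                    pp = &(*pp)->sibling;
                *pp = c;
                free(n);
            } else {
                break;      /* live, or a dead fork point we must keep */
            }
            n = parent;     /* the parent may now be collapsible too   */
        }
    }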



Fighting fork bombs

Posted Mar 31, 2011 14:54 UTC (Thu) by Seegras (guest, #20463)

Yes. And keeping the tree makes even more sense in a forensic context:

You will still know which process spawned what "inetd", even if the parent is long gone from memory or even disk.

Definitely worth some consideration.

Fighting fork bombs

Posted Mar 31, 2011 21:04 UTC (Thu) by dafid_b (guest, #67424)

Such a tree could provide a framework for a more user-friendly process inspection tool.

Hold in this tree the reason the process was created (a rough sketch of such a node follows these examples), e.g.:
"login-shell" (init hard code)
"Firefox Web Browser" (menu entry text)
"print-spooler"
"Chrome - BBC News Home" (Window title)

Background
I find myself uneasy when evaluating the safety of my system: a process list of 140-odd processes, of which I recognise perhaps 10, leaves me no wiser.

There are a couple of use cases I think the above tool could help with:
1)
Should I use the browser to transfer cash between bank accounts?
Or should I reboot first?
How can I become more confident about the code running on my system?

2)
Was that website really benign?
I allowed the site to run scripts in order to see content more clearly...
Has it created a process to execute in the background after I closed the frame?

Fighting fork bombs

Posted Apr 3, 2011 1:58 UTC (Sun) by giraffedata (subscriber, #1954)

I have long been frustrated by the Unix concept of orphan processes, for all the reasons mentioned here.

If I were redesigning Unix, I would just say that a process cannot exit as long as it has children, and there would be two forms of exit(): kill all my children and exit, and exit as soon as my children are all gone. And when a signal kills a process, it kills all its children as well.
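
A rough user-space approximation of those two exit() flavours, just to make the idea concrete (hypothetical helper names, and assuming every descendant stays in the caller's process group, which a real fork bomb would of course not respect):

    #include <errno.h>
    #include <signal.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* "Kill all my children and exit." */
    static void exit_killing_descendants(int status)
    {
        signal(SIGTERM, SIG_IGN);   /* don't take ourselves down yet        */
        kill(0, SIGTERM);           /* signal everyone in our process group */
        while (wait(NULL) > 0 || errno == EINTR)
            ;                       /* reap the children we spawned         */
        _exit(status);
    }

    /* "Exit as soon as my children are all gone." */
    static void exit_after_descendants(int status)
    {
        while (wait(NULL) > 0 || errno == EINTR)
            ;                       /* block until all children have exited */
        _exit(status);
    }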

Furthermore, rlimits would be extended to cover all of a process' descendants as well, and be refreshable over time. Goodbye, fork bomb.

There are probably applications somewhere that create a never-ending chain of forks, but I don't know how important that is.

Fighting fork bombs

Posted Apr 3, 2011 2:52 UTC (Sun) by vonbrand (guest, #4458)

Keeping processes around just because some descendant is still running is a waste of resources.

Fighting fork bombs

Posted Apr 3, 2011 19:06 UTC (Sun) by giraffedata (subscriber, #1954)

> Keeping processes around just because some descendant is still running is a waste of resources.

Seems like a pretty good return on investment to me. Maybe 50 cents' worth of memory (system-wide) to be able to avoid system failures due to runaway resource usage, and to always be able to know where processes came from. It's about the same tradeoff as keeping a process around just because its parent hasn't yet looked at its termination status, which Unix has always done.

A process that no longer has to execute shouldn't use an appreciable amount of resources.

Fighting fork bombs

Posted Apr 7, 2011 9:24 UTC (Thu) by renox (subscriber, #23785)

Currently, when the parent exits, its memory is freed entirely. You're suggesting keeping the whole process around until its children exit, which can be expensive; maybe a middle ground would be more useful, i.e. keep only the 'identity' of the parent process and free the rest.

Fighting fork bombs

Posted Apr 7, 2011 15:16 UTC (Thu) by giraffedata (subscriber, #1954)

> You're suggesting keeping the whole process around until its children exit, which can be expensive; maybe a middle ground would be more useful, i.e. keep only the 'identity' of the parent process and free the rest.

I don't think "whole process" implies the program memory, and I agree: if I were implementing this, I would have exit() free all the resources the process holds that aren't needed after the program is done running, as Linux does for zombie processes today. But, like existing zombies, I would probably keep the whole task control block for simplicity.
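
Roughly, what "keeping only the identity" might amount to is a zombie-like stub left behind for an exited ancestor, something like the following (a hypothetical layout, purely for illustration):

    #include <sys/types.h>

    /* Hypothetical stub kept for an exited ancestor: enough to answer
     * "where did this process come from?", with everything else freed. */
    struct dead_ancestor {
        pid_t pid;
        uid_t uid;
        char  comm[16];                /* executable name at exit time   */
        int   exit_status;
        struct dead_ancestor *parent;  /* its own (possibly dead) parent */
    };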

Fighting fork bombs

Posted Apr 4, 2011 16:51 UTC (Mon) by sorpigal (subscriber, #36106)

Isn't "disk/ram/cpu is cheap" typically the argument used to dismiss Unix design decisions based on efficiency?

Fighting fork bombs

Posted Apr 5, 2011 6:29 UTC (Tue) by giraffedata (subscriber, #1954)

> Isn't "disk/ram/cpu is cheap" typically the argument used to dismiss Unix design decisions based on efficiency?

This appears to be a rhetorical question, but I can't tell what the point is.

