Fighting fork bombs
Posted Mar 31, 2011 14:54 UTC (Thu) by Seegras (guest, #20463)
In reply to: Fighting fork bombs by cesarb
Parent article: Fighting fork bombs
You will still know which process spawned what "inetd", even if the parent is long gone from memory or even disk.
Definitely worth some consideration.
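To see how little of this the current kernel remembers, here is a minimal sketch (Linux-specific, error handling mostly omitted) that walks the PPID chain through /proc. Once a parent exits, the chain collapses to init (PID 1) and the original spawner is unrecoverable, which is exactly what a persistent spawn tree would fix.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Return the parent PID of "pid" as /proc reports it, or -1. */
static int parent_of(int pid)
{
    char path[64], buf[512];
    FILE *f;
    char *p;
    int ppid = -1;

    snprintf(path, sizeof(path), "/proc/%d/stat", pid);
    f = fopen(path, "r");
    if (!f)
        return -1;
    if (fgets(buf, sizeof(buf), f)) {
        /* Field 4 is the PPID; scan from the last ')' because the
         * command name in field 2 may itself contain spaces. */
        p = strrchr(buf, ')');
        if (p)
            sscanf(p + 1, " %*c %d", &ppid);
    }
    fclose(f);
    return ppid;
}

int main(int argc, char **argv)
{
    int pid = argc > 1 ? atoi(argv[1]) : getpid();

    while (pid > 1) {
        printf("%d <- ", pid);
        pid = parent_of(pid);
        if (pid < 0)
            return 1;
    }
    /* Anything older than the oldest living ancestor is forgotten. */
    puts("1 (init)");
    return 0;
}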

Posted Mar 31, 2011 21:04 UTC (Thu) by dafid_b (guest, #67424) [Link] (7 responses)
Background

I find myself uneasy when evaluating the safety of my system - the process list of 140-odd processes, with perhaps 10 recognised, leaves me no wiser..

There are a couple of use-cases I think the above tool could help with:

1) Should I use the browser to transfer cash between bank accounts? Or should I reboot first? Was that web-site really benign? I allowed the site to run scripts in order to see content more clearly... Has it created a process to execute in the background after I closed the frame?

2) How can I become more confident of code running on my system?

Hold in this tree the reason the process was created...
e.g.
"login-shell" (init hard code)
"Firefox Web Browser" (menu entry text)
"print-spooler"
"Chrome - BBC News Home" (Window title)
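A hypothetical sketch of what one node of such an annotated tree might hold. Every name here is invented for illustration; nothing like this structure exists in the kernel today.

#include <sys/types.h>
#include <time.h>

/* One node of the annotated creation tree suggested above. */
struct spawn_node {
    pid_t pid;                  /* process this node describes */
    struct spawn_node *spawner; /* retained even after the parent exits */
    time_t created;
    char reason[64];            /* "login-shell" (init hard code),
                                 * menu entry text, window title, ... */
    char comm[16];              /* executable name at creation time */
};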

Posted Apr 3, 2011 1:58 UTC (Sun) by giraffedata (guest, #1954) [Link] (6 responses)
I have long been frustrated by the Unix concept of orphan processes, for all the reasons mentioned here.
If I were redesigning Unix, I would just say that a process cannot exit as long as it has children, and there would be two forms of exit(): kill all my children and exit, and exit as soon as my children are all gone. And when a signal kills a process, it kills all its children as well.
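The second form can be approximated voluntarily in userspace today, though only for direct children; the point of the proposal is that the kernel would enforce it for the whole subtree. A minimal sketch:

#include <sys/wait.h>
#include <errno.h>
#include <stdlib.h>

/* Userspace version of the "exit as soon as my children are all
 * gone" form of exit() proposed above.  It only covers direct
 * children - grandchildren orphaned earlier are already gone -
 * which is the gap the kernel-enforced version would close. */
void exit_after_children(int status)
{
    /* wait() returns -1 with ECHILD once no children remain. */
    while (wait(NULL) > 0 || errno == EINTR)
        ;
    exit(status);
}

The first form, kill all my children and exit, is roughly what signalling a process group does today, but again only for processes that have not left the group.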
Furthermore, rlimits would be extended to cover all of a process' descendants as well, and be refreshable over time. Goodbye, fork bomb.
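The nearest existing tool is RLIMIT_NPROC, which caps processes per user rather than per subtree, so it blunts a fork bomb only at the cost of throttling everything else the same user runs. A minimal sketch of setting it before launching something untrusted:

#include <sys/resource.h>
#include <stdio.h>

int main(void)
{
    /* Cap this user at 128 processes.  The limit is per real UID,
     * not per subtree: a fork bomb hits it, but so does every other
     * process the user owns - hence the appeal of a rlimit that
     * covers just one process's descendants. */
    struct rlimit rl = { .rlim_cur = 128, .rlim_max = 128 };

    if (setrlimit(RLIMIT_NPROC, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    /* ...exec the untrusted program here; fork() in it starts
     * failing with EAGAIN once the cap is reached. */
    return 0;
}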
There are probably applications somewhere that create a never-ending chain of forks, but I don't know how important that is.

Posted Apr 3, 2011 2:52 UTC (Sun) by vonbrand (subscriber, #4458) [Link] (5 responses)
Keeping processes around just because some descendant is still running is a waste of resources.

Posted Apr 3, 2011 19:06 UTC (Sun) by giraffedata (guest, #1954) [Link] (2 responses)

> Keeping processes around just because some descendant is still running is a waste of resources.
Seems like a pretty good return on investment for me. Maybe 50 cents worth of memory (system-wide) to be able to avoid system failures due to runaway resource usage and always be able to know where processes came from. It's about the same tradeoff as keeping a process around just because its parent hasn't yet looked at its termination status, which Unix has always done.
A process that no longer has to execute shouldn't use an appreciable amount of resource.
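That claim is easy to check against today's zombies: an exited but unreaped child keeps its PID, parentage, and exit status, but no address space. A small demonstration, assuming a Linux /proc:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();

    if (child < 0)
        return 1;
    if (child == 0)
        _exit(0);       /* child exits immediately... */

    /* ...but stays visible, unreaped, for the next minute: state Z,
     * PID and PPID intact, no VmSize (address space already freed). */
    printf("inspect /proc/%d/status while this sleeps\n", (int)child);
    sleep(60);
    return 0;           /* the parent's exit lets init reap the zombie */
}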

Posted Apr 7, 2011 9:24 UTC (Thu) by renox (guest, #23785) [Link] (1 response)

You're [suggesting] keeping the whole process until its children exit, which can be expensive; maybe a middle ground could be more useful, i.e. keep only the 'identity' of the parent process and free the rest.

Posted Apr 7, 2011 15:16 UTC (Thu) by giraffedata (guest, #1954) [Link]
I don't think "whole process" implies the program memory, and I agree - if I were implementing this, I would have exit() free all the resources the process holds that aren't needed after the program is done running, as Linux does for zombie processes today. But, like existing zombies, I would probably keep the whole task control block for simplicity.

Posted Apr 4, 2011 16:51 UTC (Mon) by sorpigal (guest, #36106) [Link] (1 response)

Isn't "disk/ram/cpu is cheap" typically the argument used to dismiss Unix design decisions based on efficiency?

Posted Apr 5, 2011 6:29 UTC (Tue) by giraffedata (guest, #1954) [Link]
This appears to be a rhetorical question, but I can't tell what the point is.