If file descriptors were safe by default (close-on-exec), then I agree, let glibc scatter FDs everywhere. In my case, though, it's a bit more difficult. I'm writing a test harness that starts as root, does some non-trivial processing (including a lot of forking), then eventually drops perms and execs a potentially hostile, user-supplied executable. (By "hostile" I mean the way any executable in ~/bin on a multi-user system is potentially hostile.)
Well, I definitely run through the entire FD space to make sure ALL FDs that I don't know about are closed. If glibc opens an FD to some sensitive resource while I'm running as root, and that FD remains open when I drop perms and exec, the user's executable gets free rein over some potentially sensitive system resource.
I'll admit that I haven't thought about this too deeply (it's just a one-off hack)... Is there any better solution than running through and closing anything I don't know about? I've found the occasional lurking FD (a file leaked earlier in the process, a forgotten syslog, etc.), so my solution, while damned ugly, has probably saved me once or twice.
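For the record, the sweep is nothing fancy; roughly the following, run right before dropping perms, with is_known_fd() standing in for whatever bookkeeping tracks the handful of FDs I actually want to keep:

    #include <unistd.h>

    /* Brute-force sweep: close every descriptor above stderr that we
     * don't explicitly know about. close() on an unopened slot just
     * fails with EBADF, which is harmless here. */
    static void close_unknown_fds(int (*is_known_fd)(int))
    {
        long max = sysconf(_SC_OPEN_MAX);
        if (max < 0)
            max = 1024; /* conservative fallback if the limit is unknown */
        for (long fd = 3; fd < max; fd++) {
            if (!is_known_fd((int)fd))
                close((int)fd);
        }
    }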
Why oh why can't FDs be safe by default?
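(The irony is that the mechanism already exists; it's just opt-in at every single call site. A sketch of what each open has to look like today to be leak-proof, with an illustrative path and function name:)

    #include <fcntl.h>

    /* O_CLOEXEC atomically marks the new descriptor close-on-exec,
     * so it cannot survive a later exec(). "Safe by default" would
     * mean getting this behaviour without having to remember the flag. */
    int open_sensitive(const char *path)
    {
        return open(path, O_RDONLY | O_CLOEXEC);
    }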