Interestingly, djb's paper talks about maintaining security expectations even when you have to
run untrusted, random codebases as part of a secure application.
The basic idea is compartmentalization: for each component (especially those from a third
party) you should clearly define the input and the output, and set up access and resource
restrictions under which the component must operate. Finally, after it does its job, you
shouldn't trust its result blindly but validate it to check that it's sane.
For instance, if the purpose of the component were to extract the recipient address from an
email, then the component should only be able to read that email, produce one string as its
response, access nothing outside that email, and run within limited time and memory. Once
something comes out, it must look like an email address, for instance matching the famous
RFC 822 pattern.
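As a minimal sketch of the validation step, here is what checking the extractor's output might look like in Python. The function name and the regex are mine, and the pattern is deliberately a strict simplification rather than the full RFC 822 grammar (which also allows comments, quoted local parts, and other exotica):

```python
import re

# A deliberately simple address pattern -- NOT full RFC 822, just a
# sanity check that the untrusted component's output has the expected shape.
ADDRESS_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def validate_recipient(result: str) -> str:
    """Sanity-check the single string an untrusted extractor returned."""
    if len(result) > 254:  # length cap on addresses (per RFC 5321)
        raise ValueError("address too long")
    if not ADDRESS_RE.match(result):
        raise ValueError("does not look like an email address")
    return result

print(validate_recipient("alice@example.com"))
```

The point is not that this regex is perfect; it's that the trusted side accepts only output that fits a narrow, predeclared shape, and rejects everything else.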
To achieve this, one might have to run untrusted components in a virtual machine and/or use
the operating system's primitives to constrain CPU, memory, available system calls, etc. I'm
not sure how well Linux can do these things, but the basic idea is that it should be possible
to run even completely random code safely, provided these relatively simple constraints are
worked out first.
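On POSIX systems (including Linux), a small piece of this is doable with `setrlimit`. The sketch below, with illustrative names of my own, runs an untrusted script in a child process with hard caps on CPU time and address space; a real sandbox would also restrict system calls (e.g. via seccomp) and the filesystem view, which this does not attempt:

```python
import resource
import subprocess
import sys

def run_confined(script: str, cpu_seconds: int = 2,
                 mem_bytes: int = 1 << 30) -> str:
    """Run untrusted Python code in a child with hard resource caps.

    Only limits CPU time and address space; it is a sketch of the idea,
    not a complete sandbox (no syscall filtering, no filesystem isolation).
    """
    def limit():
        # Applied in the child process, after fork and before exec.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    out = subprocess.run(
        [sys.executable, "-c", script],
        preexec_fn=limit,         # POSIX-only hook for setting the limits
        capture_output=True,
        timeout=cpu_seconds + 1,  # wall-clock backstop on top of RLIMIT_CPU
        text=True,
    )
    return out.stdout

print(run_confined("print('hello from the sandbox')"))
```

An infinite loop in `script` gets killed by the kernel once it burns its CPU allowance, and a runaway allocation fails once it exceeds the address-space cap, regardless of what the untrusted code tries to do.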