You will never be able to reliably restore the full environment a process was in before checkpointing. The article mentioned files getting deleted, which is just a special case of resources disappearing. It seems to me that there should be a golden mean between trying to recreate the process's environment exactly and teaching applications to cope with certain things changing underneath them. This applies in particular to things that were never guaranteed not to change: network connections can break, and files can be truncated or modified by other processes while a process is working with them. It would probably make sense to measure how much of this breakage applications can already tolerate, and where they run into trouble, in order to judge whether fixing the applications is more feasible than making the checkpoint code ever trickier. It will probably be a while before this code is mainstream, so there is still time for that.
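
To make the idea concrete: an application that expects its TCP connection to survive a checkpoint/restore cycle could instead just reconnect when the kernel reports the connection gone. Below is a minimal sketch in C of that pattern; the endpoint, the connect_to_server() helper, and the single-retry policy are all hypothetical, not taken from any particular checkpointing tool.

    #include <errno.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* Hypothetical helper: open a fresh TCP connection to host:port. */
    static int connect_to_server(const char *host, int port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, host, &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }

    /* Read with one level of recovery: if the connection turns out to be
     * dead (as it might after a restore on another machine), reconnect
     * and retry instead of aborting. */
    static ssize_t resilient_read(int *fd, char *buf, size_t len,
                                  const char *host, int port)
    {
        ssize_t n = read(*fd, buf, len);
        if (n >= 0)
            return n;
        if (errno == ECONNRESET || errno == EPIPE || errno == ETIMEDOUT) {
            close(*fd);
            *fd = connect_to_server(host, port);
            if (*fd < 0)
                return -1;
            return read(*fd, buf, len); /* single retry; real code would loop */
        }
        return -1;
    }

    int main(void)
    {
        /* Hypothetical endpoint for illustration only. */
        int fd = connect_to_server("127.0.0.1", 7000);
        if (fd < 0)
            return 1;
        char buf[256];
        ssize_t n = resilient_read(&fd, buf, sizeof(buf), "127.0.0.1", 7000);
        close(fd);
        return n < 0 ? 1 : 0;
    }

An application written this way needs nothing special from the checkpoint code: the restored socket can simply come back dead, and recovery happens at the point where the application already has to handle I/O errors anyway.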