You don't need to checkpoint every minute; that would saturate the filesystem on current clusters. But you can do it every now and then, and if your filesystem cannot store all the RAM of your compute nodes in a reasonably short time, it is already not up to the task.
I expect an exascale computer to come with a filesystem to match. If checkpoint/restart (C/R) is correctly implemented, that means petabytes of data being written simultaneously, with no bottleneck other than a barrier to halt all processes in a consistent state. In the end it's the same order of data the computation would write when it ends to store the result. If the cluster cannot handle that, it's just an entry in the Top500 dick size contest, not something to do serious work with.
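To put rough numbers on it, here is a back-of-envelope sketch. All figures are assumptions for illustration (node count, RAM per node, and aggregate filesystem bandwidth vary a lot between machines), not specs of any particular system:

```python
# Back-of-envelope full-system checkpoint time.
# All numbers below are illustrative assumptions, not real machine specs.
nodes = 10_000
ram_per_node = 512e9   # bytes: 512 GB per node (assumed)
fs_bandwidth = 10e12   # bytes/s: 10 TB/s aggregate filesystem write (assumed)

total_ram = nodes * ram_per_node             # ~5 PB of state to dump
checkpoint_time = total_ram / fs_bandwidth   # seconds for a full checkpoint

print(f"Total RAM: {total_ram / 1e15:.2f} PB")
print(f"Full checkpoint: {checkpoint_time:.0f} s (~{checkpoint_time / 60:.1f} min)")
```

Under those assumptions a full dump takes minutes, not seconds, which is exactly why checkpointing every minute is off the table while checkpointing every hour is perfectly reasonable.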
The network speed should definitely not be the problem. It should be able to transfer a node's entire memory contents within a few seconds. Otherwise the machine simply could not run computations that exchange a lot of data in the first place.
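A quick sanity check on the per-node transfer time, again under assumed figures (RAM size and NIC speed are illustrative; real nodes often aggregate several NICs):

```python
# Time to push one node's RAM over its network interface.
# Both figures are assumptions for illustration.
ram_per_node = 512e9         # bytes: 512 GB per node (assumed)
nic_bandwidth = 200e9 / 8    # bytes/s: one 200 Gb/s NIC (assumed)

t = ram_per_node / nic_bandwidth
print(f"One NIC: {t:.1f} s; four NICs in parallel: {t / 4:.1f} s")
```

With a single NIC that is tens of seconds; with the multiple injection links typical of fat HPC nodes it drops to a few seconds, consistent with the point above.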