Local storage: 6 Gbps+ at ~1-10 ms latency (even more bandwidth and lower latency for PCIe-attached RAM solutions).
"Network" RAM: 1 Gbps at ~1-10 ms latency (depending on network topology).
Btw - this is typically called swap. It never really makes sense to swap remotely. Now if you can come up with a mechanism that lets you transport the algorithms seamlessly too (i.e. so you can transfer the code that works with the data to the remote machine as well), then you're on to something, but I think that can only be done on a per-problem basis, so you're back at step 1.
So there doesn't seem to be a compelling reason to add a bunch of complexity for something that brings no benefit. What does make sense is figuring out how to allocate RAM in a way that allows large parts of it to be disabled to save power (i.e. keeping most of RAM powered off even during execution) for embedded systems and servers.
Copyright © 2018, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds