I'm working with genomic data, and some of our workloads are "spiky". They usually require 2-3GB of RAM and easily fit on Amazon EC2 "large" nodes, but occasionally they need more than 10GB, which calls for considerably more powerful (and expensive!) nodes. You really start appreciating RAM usage when you're paying for it by the hour (I think I now understand early mainframe programmers).
I've cobbled together a system that uses checkpointing to move workloads across nodes when there's a danger of swapping. It works, but I'd really like the ability to 'borrow' extra RAM for short periods.
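For what it's worth, the trigger side of such a system can be quite simple. Below is a minimal sketch of the idea, not the commenter's actual system: watch a process's resident set size via /proc, and when it crosses a threshold, checkpoint it with CRIU so it can be restored on a bigger node. The thresholds, paths, and function names are all assumptions for illustration.

```python
import subprocess

# Assumed numbers for illustration: checkpoint well before the node
# starts swapping, not when memory is already exhausted.
NODE_RAM_MB = 7500          # usable RAM on a "large"-class node (assumption)
CHECKPOINT_AT_MB = 6000     # migration threshold (assumption)

def rss_mb(pid):
    """Read a process's resident set size (MB) from /proc (Linux only)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) // 1024  # value is in kB
    return 0

def should_checkpoint(current_mb, threshold_mb=CHECKPOINT_AT_MB):
    """Decide whether the workload should be checkpointed and moved."""
    return current_mb >= threshold_mb

def checkpoint(pid, images_dir="/var/tmp/ckpt"):
    """Dump the process image with CRIU for restore on another node.
    Requires criu to be installed; --shell-job handles processes
    started from an interactive shell."""
    subprocess.run(["criu", "dump", "-t", str(pid),
                    "--images-dir", images_dir, "--shell-job"],
                   check=True)
```

A real deployment would poll `rss_mb()` periodically, rsync the image directory to the target node, and run `criu restore` there; the hard parts (open sockets, shared memory, GPU state) are exactly where checkpoint/restore gets painful in practice.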
Copyright © 2017, Eklektix, Inc.