The future calculus of memory management
Posted Jan 19, 2012 14:32 UTC (Thu) by felixfix (subscriber, #242)
Posted Jan 19, 2012 15:44 UTC (Thu) by cesarb (subscriber, #6266)
It is not just the price of the memory stick. It also uses more power. You might need a more expensive server with more memory slots. It is one more piece of hardware which can fail and need to be replaced.
On the software side, the operating system has to use more memory to track all the available pages, and on 32-bit architectures that metadata may have to live in the scarce low-memory region (the kernel's directly mapped area, typically the first ~896MB on 32-bit x86). Your memory management algorithms might also need more time to manage all these pages.
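The bookkeeping cost is easy to estimate: the kernel keeps a per-page descriptor (struct page) for every page frame. A rough back-of-the-envelope sketch in Python, where the 64-byte descriptor size and 4 KiB page size are illustrative assumptions rather than exact figures for any particular kernel:

```python
# Rough estimate of kernel page-tracking overhead.
# Assumes ~64 bytes of metadata per 4 KiB page frame;
# the real struct page size varies by kernel version and config.
PAGE_SIZE = 4 * 1024          # 4 KiB pages
METADATA_PER_PAGE = 64        # bytes per struct page (assumed)

def tracking_overhead(ram_bytes):
    """Return bytes of metadata needed to track ram_bytes of RAM."""
    pages = ram_bytes // PAGE_SIZE
    return pages * METADATA_PER_PAGE

GIB = 1024 ** 3
for ram_gib in (4, 64, 256):
    overhead = tracking_overhead(ram_gib * GIB)
    print(f"{ram_gib} GiB RAM -> {overhead // (1024 * 1024)} MiB of page metadata")
```

Under these assumptions the overhead is a fixed 64/4096 ≈ 1.6% of RAM, which is why doubling memory also grows the kernel's own footprint, and why on 32-bit machines that growth eats into a region that cannot be extended.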
Posted Jan 19, 2012 16:53 UTC (Thu) by Cyberax (✭ supporter ✭, #52523)
I'm working with genomic data and some of our workloads are "spiky". They usually require 2-3GB of RAM and can easily fit on Amazon EC2 "large" nodes. But sometimes they need more than 10GB of RAM, which requires considerably more powerful (and expensive!) nodes. You really start appreciating RAM usage when you're paying for it by the hour (I think I now understand early mainframe programmers).
I've cobbled together a system that uses checkpointing to move workloads across nodes when swapping becomes a danger. It works, but I'd really like the ability to 'borrow' extra RAM for short periods.
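The trigger for such a system might be a simple memory-pressure check. A minimal sketch, assuming the data comes from a Linux /proc/meminfo-style source; the 5% threshold and the use of MemFree as the pressure signal are illustrative assumptions (a real system would also count reclaimable page cache):

```python
# Sketch of a "danger of swapping" check for a checkpoint/migrate system.
# Field names follow /proc/meminfo; threshold is an assumed policy knob.

def parse_meminfo(text):
    """Parse /proc/meminfo-style 'Key:  value kB' lines into a dict of bytes."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        parts = rest.split()
        if key and parts:
            info[key] = int(parts[0]) * 1024  # values are reported in kB
    return info

def swap_danger(info, min_free_fraction=0.05):
    """True if free memory has dropped below min_free_fraction of total."""
    return info["MemFree"] < min_free_fraction * info["MemTotal"]

# Example with a synthetic snapshot: 8 GB total, ~200 MB free.
sample = "MemTotal:       8000000 kB\nMemFree:         200000 kB\n"
info = parse_meminfo(sample)
print(swap_danger(info))  # 200000 kB < 5% of 8000000 kB -> True
```

On a live node one would read the real file with `open("/proc/meminfo").read()` and, on a positive result, checkpoint the workload and restore it on a node with more headroom.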
Posted Jan 19, 2012 18:01 UTC (Thu) by dgm (subscriber, #49227)
There are many old machines out there that do not support more than 2 or 4 GB. Many of those machines are in production environments, and are not going to be replaced any time soon for multiple reasons, including the cost of retesting, downtime, and current buying policies.
Posted Jan 20, 2012 8:05 UTC (Fri) by ekj (guest, #1524)
Which means it's worth spending a lot of programmer-hours on even a modest decrease: a single percentage-point reduction in memory consumption is worth a million dollars.
Lots of stuff is worth it at large scale, even if at small scale it doesn't matter. If you're a small company owning a dozen servers, then in 99% of cases it'll be cheaper just to throw more RAM at them than to spend programmer time reducing the memory footprint of the applications.
If you've got a million servers, the math looks different.
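To make the arithmetic concrete, here is a hedged sketch; the fleet size and the $100-of-RAM-per-server figure are illustrative assumptions, not data from the thread:

```python
# Illustrative fleet-scale arithmetic; all figures are assumed.
servers = 1_000_000
ram_cost_per_server = 100.0  # dollars of RAM per machine (assumed)
fleet_ram_cost = servers * ram_cost_per_server  # $100M fleet-wide

def savings(reduction_fraction):
    """Hardware value freed by shrinking memory footprint fleet-wide."""
    return fleet_ram_cost * reduction_fraction

print(f"1% reduction frees ${savings(0.01):,.0f}")  # $1,000,000
```

The same 1% that is invisible on a dozen servers ($12 of RAM) pays for a lot of engineering time at a million.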
Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds