Notes from the Montreal Linux Power Management Mini-Summit
Posted Aug 3, 2009 21:32 UTC (Mon) by elanthis (guest, #6227)
My desktop does suspend/hibernate, for example, so I can leave my machine powered off overnight while still having my full session restored when I start it up in the morning.
For big power-hungry workstations, power saving is even more critical. The power consumed by workstations and servers is one of the biggest (if not the biggest) expenses a large ISP or IT department has to deal with.
Posted Aug 3, 2009 22:13 UTC (Mon) by drag (subscriber, #31333)
Remember that mainframe applications are typically charged by MIPS cycles, so the more processor time you use, the more everything costs. In an efficiently running mainframe environment with proper setup and accounting, you should be running at about 100% CPU 24/7 in order to get the best value.
They are not like PCs, where the user or I/O is the bottleneck and the CPU spends most of its time idle... Mainframes tend to have massive amounts of I/O and relatively little CPU.
I would still like to have suspend-to-disk capabilities in a mainframe environment, however. For various hardware issues and whatnot you do need to plan for occasional downtime. Being able to suspend the Linux systems to disk reduces that downtime: instead of needlessly wasting CPU time booting up and initializing the system, you just load the memory snapshot, which should almost always be much faster on a system like that.
Posted Aug 3, 2009 22:43 UTC (Mon) by ewan (subscriber, #5533)
Posted Aug 4, 2009 16:09 UTC (Tue) by ewan (subscriber, #5533)
OT: Biggest expense
Posted Aug 4, 2009 14:21 UTC (Tue) by man_ls (subscriber, #15091)
Posted Aug 4, 2009 20:41 UTC (Tue) by dlang (✭ supporter ✭, #313)
even allowing for 2x power consumption (to cover cooling, etc.), servers on a three-year-or-so replacement cycle would still cost more than the power they consume over that time (assuming max power draw the entire time)
power is a significant cost, and since it shows up as a single line item it jumps out at people, but it's still not as bad as people are making it out to be.
Posted Aug 4, 2009 22:09 UTC (Tue) by man_ls (subscriber, #15091)
Similarly, if each server costs $3k, the break-even point comes at a lifecycle of just ~3 years. I would say that either machines cost more than that or use less juice, so servers should still rank above power.
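The break-even claim above can be sketched with a quick calculation. All figures here are illustrative assumptions, not from the thread: a 500 W server doubled to 1 kW for cooling overhead (the 2x factor mentioned earlier), $0.11/kWh electricity, and a $3,000 purchase price.

```python
# Hedged sketch: how long until a server's cumulative electricity cost
# equals its purchase price. The draw, overhead, and price per kWh are
# assumed values chosen only to illustrate the "~3 years" figure.
HOURS_PER_YEAR = 24 * 365

def break_even_years(server_cost, draw_watts, price_per_kwh, overhead=2.0):
    """Years until power spending matches the purchase price."""
    kw = draw_watts / 1000 * overhead            # include cooling overhead
    annual_cost = kw * HOURS_PER_YEAR * price_per_kwh
    return server_cost / annual_cost

# With these assumptions, a $3k server breaks even at roughly 3.1 years,
# consistent with the ~3-year lifecycle figure above.
years = break_even_years(3000, 500, 0.11)
```

Shorter replacement cycles or cheaper power tip the balance further toward hardware as the larger line item.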
Posted Aug 4, 2009 22:30 UTC (Tue) by dlang (✭ supporter ✭, #313)
if you have any serious use you have at least two people (probably three) so that someone is available all the time (with vacations, sick time, etc.). a _lot_ of places that meet this criterion have fewer than the 220-330 servers that would be needed to maintain that ratio.
this ratio is also very dependent on how many different variations of server configuration you have. google gets such phenomenal numbers of servers per admin because they have _lots_ of any one configuration. if they only had a couple thousand servers per configuration they would need far more admins than they do ;-) they also don't have their admins deal with failures, they just shut down the failed systems.
in many ways I would rather have another 50 servers to manage that fit one of my existing baselines than add one special-exception box that is completely different.
Posted Aug 13, 2009 1:41 UTC (Thu) by deleteme (guest, #49633)
One baseline is good but not achievable.
Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds