The case for the /usr merge
Posted Jan 29, 2012 18:08 UTC (Sun) by jimparis (subscriber, #38647)
Posted Jan 29, 2012 20:33 UTC (Sun) by sbergman27 (subscriber, #10767)
I've already had enough of dealing with silly "limits" to last a lifetime. Current rate of change doesn't matter. It's the first derivative that does. The second has always remained remarkably close to 0. :-)
Posted Jan 29, 2012 23:03 UTC (Sun) by rgmoore (subscriber, #75)
I suspect that the problem is not with software authors not understanding the speed of hardware development, but with them underestimating the lifespan of their projects. They assume that their project will have a limited lifespan, so they take shortcuts that will make their work easier but cause problems at a predictable, but seemingly far-off, time. What they are forgetting is that sensible people don't replace working infrastructure just for fun, so their design is very likely to remain in use until one of those arbitrary limits is hit.
Posted Jan 30, 2012 0:01 UTC (Mon) by sbergman27 (subscriber, #10767)
Of course, it's also to the IHVs' advantage for their hardware to become obsolete in a few years. After all, how were they to know that hard drives would increase in size so fast?
Today, they are happy to sell you a new system that is 100x more powerful, which will do for you... about as much as your old system did.
It's a permanent state of hyperinflation that people are so used to that they hardly notice they're on a treadmill. Fundamentally, there is less reason to replace their computers than to replace their cars.
Computers generally don't wear out. Except for hard drives. And if the new drive won't work with your old machine, planned obsolescence wins again.
And the "ka-ching!" sound of the cash register rings again at Dell, or HP, or Acer.
Posted Jan 30, 2012 3:15 UTC (Mon) by raven667 (subscriber, #5198)
Posted Jan 30, 2012 8:17 UTC (Mon) by sbergman27 (subscriber, #10767)
The Sprint has saved me over $30,000 in fuel cost over its lifetime, compared to the 20mpg car you suggest for comparison. Not including the avoided cost of buying new cars to replace it. (Suzuki reliability was amazing.)
The "advantage" to throwing away old computers and replacing them with new, more fuel efficient ones has always seemed a bit iffy to me. I support the practice. But I'm not sure it makes economic sence based solely upon electricity savings.
On an absolute scale, looking at fossil fuel usage kilogram for kilogram, more efficient cars are clearly more important than more efficient computers.
Posted Jan 30, 2012 14:43 UTC (Mon) by raven667 (subscriber, #5198)
This is different from computers, which are both getting more energy efficient _and_ more capable, leading to consolidation on top of the per-core power savings, so it's more like a 20:1 efficiency improvement; really 40:1, because you usually spend as much on cooling as on power. A 20-year-old car has only minor capability differences from a modern, high-efficiency car.
Posted Jan 30, 2012 18:28 UTC (Mon) by dlang (✭ supporter ✭, #313)
Do you have pointers to the efficiency claims for capacity/power savings growing that significantly?
The other issue is that electricity is pretty cheap, so it takes a LOT of power savings to equal the cost of a new server.
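To put a rough number on that, here is a minimal back-of-the-envelope sketch in Python; every figure in it (server counts, wattages, electricity price, hardware cost) is an illustrative assumption, not a number from this thread:

    # Rough payback time for replacing several old servers with one new one.
    # All numbers are illustrative assumptions.
    old_servers = 10             # lightly loaded boxes to consolidate
    watts_per_old_server = 250
    watts_new_server = 500       # one larger box running the same workloads
    price_per_kwh = 0.10         # USD per kWh
    new_server_cost = 5000.0     # USD

    watts_saved = old_servers * watts_per_old_server - watts_new_server
    kwh_saved_per_year = watts_saved / 1000.0 * 24 * 365
    savings_per_year = kwh_saved_per_year * price_per_kwh
    # If cooling costs roughly as much as the power itself, the savings double.
    savings_with_cooling = savings_per_year * 2

    print("payback, power only:   %.1f years" % (new_server_cost / savings_per_year))
    print("payback, with cooling: %.1f years" % (new_server_cost / savings_with_cooling))

With those made-up numbers the new box pays for itself in a couple of years only because ten machines are being collapsed into one; a one-for-one replacement justified purely by power savings takes far longer to break even, which is the point about cheap electricity above.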
Posted Jan 30, 2012 20:12 UTC (Mon) by raven667 (subscriber, #5198)
Posted Jan 30, 2012 20:30 UTC (Mon) by dlang (✭ supporter ✭, #313)
Also, this sort of savings from virtualization assumes that you were running your prior servers lightly loaded. If you have an application that is large enough that you need to run it on multiple machines to start with, virtualization is a net loss (although this net loss is frequently covered by doing the consolidation at the same time as a server upgrade).
I don't know about your servers, but on the ones I am seeing, 8-12 cores with 64-128G of RAM take significantly more power than the 2-core servers with 4-8G of RAM that I still have in production. Measurements show somewhere between 2x and 3x.
Posted Jan 30, 2012 21:06 UTC (Mon) by sbergman27 (subscriber, #10767)
The Heist: http://www.youtube.com/watch?v=T-NpLu2xC38
Posted Jan 30, 2012 20:10 UTC (Mon) by sbergman27 (subscriber, #10767)
That's quite a different thing than the lifetime of the car. It gets sold to someone else. Pick up an Autotrader mini-mag sometime. Basically, a classifieds specializing in used cars. Not classics, necessarily. Just used cars. 200,000+ miles is not in the least uncommon.
And I, too, would be interested in your data supporting your claim of such amazing efficiency improvements in computers.
Also, while I have your ear, and in reference to another thread, I would be interested in your explanation as to why the Linux I/O schedulers would not sort the read/write requests of a random access benchmark to provide *far* better performance than the 6ms per request that you seem to agree with Dave about. Even the noop scheduler does elevator sorting of requests. For that matter, so does the drive's internal cache.
If you do not understand one or more of those terms, let me know and I will explain them to you.
Posted Jan 30, 2012 21:08 UTC (Mon) by raven667 (subscriber, #5198)
As far as IO schedulers go, elevators help but are no panacea. I think the estimate of 175 IOPS on a 7.2k RPM drive is about right. A 15k RPM drive may get you close to 250 IOPS, but that's the limit of spinning rust. An average seek time of 6ms doesn't seem out of whack; it actually sounds pretty good. With perfect elevators, if the data isn't immediately adjacent then there is going to be some number of milliseconds of track-to-track seek time for every IO. The longer IO is delayed so that it can be sorted, the more latency is added onto all the requests. In any event a random IO test is going to be the worst possible case for an elevator algorithm.
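For context, the usual back-of-the-envelope for random IOPS on a rotating drive is one request per (average seek + half a rotation); command queueing and elevator sorting are what let real drives do better than this naive figure. The seek times and RPMs below are illustrative assumptions, not measurements from this thread:

    # Naive random-IOPS estimate: each request pays an average seek
    # plus, on average, half a platter rotation.
    def random_iops(avg_seek_ms, rpm):
        half_rotation_ms = 0.5 * 60000.0 / rpm   # average rotational latency
        return 1000.0 / (avg_seek_ms + half_rotation_ms)

    # Illustrative drive parameters only; real drives vary.
    print("7.2k RPM, 6.0 ms seek: %.0f IOPS" % random_iops(6.0, 7200))
    print("15k RPM,  3.5 ms seek: %.0f IOPS" % random_iops(3.5, 15000))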
For example, here are the results of a naive model where one has 65535 tracks and a linear track-to-track seek cost. The first table is random and the second has been sorted by an elevator. In practice there is always an elevator somewhere, in the drive, in the drive controller, and in the OS, so you will never see the first access pattern, and more sorting isn't going to make the second pattern any better.
More info on actual drive characteristics for better modeling
request   address    seek
      1     62640       -
      2     34681   27959
      3     21062   13619
      4     39674   18612
      5     46138    6464
      6     42942    3196
      7      3227   39715
      8     25600   22373
      9     62505   36905
     10     18344   44161

request   address    seek
      7      3227       -
     10     18344   15117
      3     21062    2718
      8     25600    4538
      2     34681    9081
      4     39674    4993
      6     42942    3268
      5     46138    3196
      9     62505   16367
      1     62640     135
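For anyone who wants to play with it, here is a minimal Python sketch of the naive model described above (65535 tracks, linear track-to-track seek cost). The addresses are freshly random each run, so the totals won't match the tables exactly, but the elevator-sorted pass will never be more expensive than the arrival order:

    import random

    TRACKS = 65535        # naive model: 65535 tracks, linear seek cost
    NUM_REQUESTS = 10

    def total_seek(addresses):
        # Sum of absolute track-to-track distances, visited in the given order.
        return sum(abs(b - a) for a, b in zip(addresses, addresses[1:]))

    requests = [random.randrange(TRACKS) for _ in range(NUM_REQUESTS)]

    print("random order seek distance:  ", total_seek(requests))
    print("elevator order seek distance:", total_seek(sorted(requests)))  # one ascending sweep

Note that for a single batch the elevator-sorted total is just the span from the lowest to the highest address; the catch, as noted above, is that holding requests back so they can be sorted adds latency to every one of them.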
Posted Jan 30, 2012 21:16 UTC (Mon) by sbergman27 (subscriber, #10767)
Posted Jan 30, 2012 1:44 UTC (Mon) by slashdot (guest, #22014)
The limit is in the MBR format and/or BIOS implementation.