
The case for the /usr merge

Posted Jan 30, 2012 20:10 UTC (Mon) by sbergman27 (guest, #10767)
In reply to: The case for the /usr merge by raven667
Parent article: The case for the /usr merge

"I would put 100k-150k for a lifetime of a car, most people don't hold on to them even that long."

That's quite a different thing from the lifetime of the car. It gets sold to someone else. Pick up an Autotrader mini-mag sometime: basically a classified-ad magazine specializing in used cars. Not classics, necessarily. Just used cars. 200,000+ miles is not in the least uncommon.

And I, too, would be interested in the data supporting your claim of such amazing efficiency improvements in computers.

Also, while I have your ear, and in reference to another thread, I would be interested in your explanation of why the Linux I/O schedulers would not sort the read/write requests of a random-access benchmark to provide *far* better performance than the 6ms per request that you seem to agree with Dave about. Even the noop scheduler does elevator sorting of requests. For that matter, so does the drive's internal cache.

If you do not understand one or more of those terms, let me know and I will explain them to you.


The case for the /usr merge

Posted Jan 30, 2012 21:08 UTC (Mon) by raven667 (subscriber, #5198) [Link]

I did some googling and came up with some representative links. The take-away is that each new generation of machines for the last five years or so has drawn about the same or slightly more power, but we went from dual single-core to dual dual-core to dual quad-core to dual hex-core without doubling, quadrupling, or octupling power usage. All that power not turned into heat also takes a load off cooling, which is a further power saving.

As far as I/O schedulers go, elevators help but are no panacea. I think the estimate of 175 IOPS on a 7.2k RPM drive is about right. A 15k RPM drive may get you close to 250 IOPS, but that's the limit of spinning rust. An average seek time of 6ms doesn't seem out of whack; it actually sounds pretty good. Even with a perfect elevator, if the data isn't immediately adjacent there is going to be some number of milliseconds of track-to-track seek time for every I/O. And the longer an I/O is delayed so that it can be sorted, the more latency is added onto all the requests. In any event, a random I/O test is the worst possible case for an elevator algorithm.
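For rough context, the IOPS figures above follow directly from per-request service time. A minimal sketch (the 5.7 ms and 4.0 ms service times here are back-computed assumptions that reproduce the quoted figures, not measured values):

```python
# IOPS is just the reciprocal of average per-request service time.
def iops(service_time_ms):
    """Requests per second for a given average service time in ms."""
    return 1000.0 / service_time_ms

print(round(iops(5.7)))   # ~175, the 7.2k RPM estimate
print(round(iops(4.0)))   # 250, the 15k RPM ceiling mentioned
```

Note that the quoted 6ms is the average *seek* time alone; rotational latency (about 4.2 ms per half-revolution at 7200 RPM) comes on top of it for truly random requests, which is why real random-I/O numbers often land below the back-of-envelope figure.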

For example, here are the results of a naive model with 65535 tracks and a linear track-to-track seek cost. The first table is the random order and the second has been sorted by an elevator. In practice there is always an elevator somewhere — in the drive, in the drive controller, and in the OS — so you will never see the first access pattern, and more sorting isn't going to make the second pattern any better.

More info on actual drive characteristics for better modeling

request  address   seek
      1    62640      -
      2    34681  27959
      3    21062  13619
      4    39674  18612
      5    46138   6464
      6    42942   3196
      7     3227  39715
      8    25600  22373
      9    62505  36905
     10    18344  44161

Total           213004

request  address   seek
      7     3227      -
     10    18344  15117
      3    21062   2718
      8    25600   4538
      2    34681   9081
      4    39674   4993
      6    42942   3268
      5    46138   3196
      9    62505  16367
      1    62640    135

Total            59413
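The naive model above is easy to reproduce. A minimal Python sketch, using the same addresses as the tables and taking the linear seek cost to be the absolute track distance between consecutive requests:

```python
# Request addresses from the example (track numbers out of 65535),
# listed in their original random arrival order.
addresses = [62640, 34681, 21062, 39674, 46138, 42942, 3227, 25600, 62505, 18344]

def total_seek(order):
    """Total seek cost: sum of track-to-track distances, linear cost model."""
    return sum(abs(b - a) for a, b in zip(order, order[1:]))

print(total_seek(addresses))          # FIFO order:    213004
print(total_seek(sorted(addresses)))  # elevator sort:  59413
```

Sorting cuts the total seek distance by about 3.6x in this run, but note that a real elevator can only sort the requests that happen to be queued at the same time, so a random-I/O benchmark issuing one request at a time sees far less benefit.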

The case for the /usr merge

Posted Jan 30, 2012 21:16 UTC (Mon) by sbergman27 (guest, #10767) [Link]

Let's move this back to the XFS thread.

Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds