The case for the /usr merge

Posted Jan 28, 2012 5:04 UTC (Sat) by jimparis (subscriber, #38647)
In reply to: The case for the /usr merge by rickmoen
Parent article: The case for the /usr merge

> 1. That's only if you're still keeping a separate /boot filesystem, for starters. (Why in 2012, by the way? The 1024-logical-cylinder limit on x86 went away ages ago.)

And the 2 TiB limit came up to take its place. I'm glad that separate /boot is still so well supported. I have a system with 3 TB drives, but the BIOS only recognizes and boots from MBR partition tables. I need a /boot partition entirely contained within the first 2 TiB, and then a GPT containing the actual partitions for Linux.
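
For reference, the arithmetic behind that limit, assuming conventional 512-byte logical sectors (the MBR partition entry stores the start LBA and sector count as 32-bit values):

    # MBR partition entries hold the start LBA and sector count in 32-bit
    # fields, so with 512-byte logical sectors the addressable range is:
    SECTOR_SIZE = 512                    # bytes per logical sector (assumed)
    MAX_SECTORS = 2**32                  # 32-bit LBA/count fields in the MBR
    limit = MAX_SECTORS * SECTOR_SIZE
    print(limit, limit / 2**40, "TiB")   # 2199023255552 2.0 TiB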



The case for the /usr merge

Posted Jan 29, 2012 13:51 UTC (Sun) by mastro (guest, #72665) [Link]

This sounds like a bootloader limitation; AFAIK the BIOS just reads and executes the first sector if it ends with the two-byte magic signature, even if it's not an MBR.
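
A minimal sketch of that check, assuming a raw disk image file (the signature is the bytes 0x55 0xAA at offsets 510-511 of the first sector):

    # Check the two-byte boot signature (0x55 0xAA) at the end of the
    # first 512-byte sector, roughly what a legacy BIOS does before
    # jumping to the code there. The image path is an assumption.
    import sys

    def is_bootable(image_path):
        with open(image_path, "rb") as f:
            sector = f.read(512)
        return len(sector) == 512 and sector[510:512] == b"\x55\xaa"

    if __name__ == "__main__":
        print(is_bootable(sys.argv[1]))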

The case for the /usr merge

Posted Jan 29, 2012 18:08 UTC (Sun) by jimparis (subscriber, #38647) [Link]

In a way, yeah, it is a bootloader limitation -- GRUB is still using the BIOS to read sectors off the disk, so it can only ask for sectors within the first 2 TiB. GRUB does have an (incomplete) ATA driver that can work in some cases, and that driver support could be improved, but there's no way you're going to fit all those drivers into the first 512-byte sector, so there will necessarily still be BIOS calls to read more sectors from the disk. And since those sectors need to be below 2 TiB, you're limited to either playing tricks (like cramming your drivers into empty alignment space before your real partitions) or something a bit cleaner (like giving it a real /boot partition, in which case you can keep using the BIOS calls for loading the kernel and initrd too).

The case for the /usr merge

Posted Jan 29, 2012 20:33 UTC (Sun) by sbergman27 (guest, #10767) [Link]

We take these things as being "normal". But sometimes newbies say the darndest things. A while back, I had one ask me the question (paraphrased) "We know from long experience that hardware capabilities are increasing geometrically. And we have a good idea what that geometric rate is. Why don't they just build computers and software accordingly?". And I've got to admit, he had a point.

I've already had enough of dealing with silly "limits" to last a lifetime. The current capacity doesn't matter; it's the first derivative that does. And the second has always remained remarkably close to 0. :-)

-Steve

The case for the /usr merge

Posted Jan 29, 2012 23:03 UTC (Sun) by rgmoore (✭ supporter ✭, #75) [Link]

I suspect that the problem is not software authors failing to understand the speed of hardware development, but underestimating the lifespan of their projects. They assume that their project will have a limited lifespan, so they take shortcuts that make their work easier but cause problems at a predictable, yet seemingly distant, point in the future. What they are forgetting is that sensible people don't replace working infrastructure just for fun, so their design is very likely to remain in use until one of those arbitrary limits is hit.

The case for the /usr merge

Posted Jan 30, 2012 0:01 UTC (Mon) by sbergman27 (guest, #10767) [Link]

"I suspect that the problem is not with software authors not understanding the speed of hardware development, but with them underestimating the lifespan of their projects."

Of course, it's also to the IHVs' advantage for their hardware to become obsolete in a few years. After all, how were they to know that hard drives would increase in size so fast?

Today, they are happy to sell you a new system that is 100x more powerful, which will do for you... about as much as your old system did.

It's a permanent state of hyperinflation that people are so used to that they hardly notice they're on a treadmill. Fundamentally, there is less reason to replace a computer than a car.

Computers generally don't wear out. Except for hard drives. And if the new drive won't work with your old machine, planned obsolescence wins again.

And the "ka-ching!" sound of the cash register rings again at Dell, or HP, or Acer.

-Steve

The case for the /usr merge

Posted Jan 30, 2012 3:15 UTC (Mon) by raven667 (subscriber, #5198) [Link]

One thing that does make a difference is "fuel efficiency": the resource savings on a new computer, paired with consolidation, often more than pays for the cost of the new hardware. The funny thing is that the benefits of fuel efficiency in cars are often overrated; the cost of a new car, or the difference in price between a Prius and a 20 mpg car, is not going to be made up by fuel savings over the average lifetime of the car.

The case for the /usr merge

Posted Jan 30, 2012 8:17 UTC (Mon) by sbergman27 (guest, #10767) [Link]

The fuel savings between a Prius and a 20 mpg car in the US works out to about $9000 over 150,000 miles. I'm not sure what to put in for a car's "lifetime". Mine currently have 160,000-380,000 miles on them, and range from 24 to 45 years old. I don't think that's typical. The one with 380,000 miles on it is my 1988 Chevy Sprint Metro. (Really a rebadged Suzuki.) It beats the Prius, with an original EPA rating of 55 mpg city / 60 mpg highway. (The 2008-adjusted numbers are 44 mpg city / 51 mpg highway, in which case it only beats the Prius on the highway.)
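
To make that arithmetic explicit, a quick sketch (the mpg figures and gas price below are my assumptions, not figures from the comment; the ~$9000 figure above corresponds to a gas price of about $2/gallon):

    # Back-of-the-envelope fuel savings: gallons burned at each mpg over
    # the same distance, times the price of gas. All inputs are assumptions.
    def fuel_savings(miles, mpg_new, mpg_old, price_per_gallon):
        gallons_saved = miles / mpg_old - miles / mpg_new
        return gallons_saved * price_per_gallon

    # 150,000 miles, a ~50 mpg Prius vs. a 20 mpg car, $2/gallon:
    print(fuel_savings(150_000, 50, 20, 2.00))   # 9000.0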

The Sprint has saved me over $30,000 in fuel cost over its lifetime, compared to the 20mpg car you suggest for comparison. Not including the avoided cost of buying new cars to replace it. (Suzuki reliability was amazing.)

The "advantage" to throwing away old computers and replacing them with new, more fuel efficient ones has always seemed a bit iffy to me. I support the practice. But I'm not sure it makes economic sence based solely upon electricity savings.

On an absolute scale, looking at fossil fuel usage kilogram for kilogram, more efficient cars are clearly more important than more efficient computers.

The case for the /usr merge

Posted Jan 30, 2012 14:43 UTC (Mon) by raven667 (subscriber, #5198) [Link]

I would put 100k-150k miles as the lifetime of a car; most people don't hold on to them even that long. In the Prius's case that's not even including the cost of replacing the battery packs, which I would expect to wear out in that time frame. So at best you have $10k in savings, probably less, which means that keeping an older car and putting more miles on it (taking it past 150k, like your cars) is probably more energy efficient than building/buying a new one every couple of years just to take advantage of a 5-15 mpg difference.

This is different from computers, which are getting both more energy efficient _and_ more capable; the added capability leads to consolidation on top of the per-core power savings, so it's more like a 20:1 efficiency improvement -- actually 40:1, because you usually spend as much on cooling as on power. A 20-year-old car has only minor capability differences from a modern, high-efficiency car.

The case for the /usr merge

Posted Jan 30, 2012 18:28 UTC (Mon) by dlang (subscriber, #313) [Link]

I have seen capacity growing at the rates that you are talking about, but not energy efficiency.

do you have pointers to the efficiency claims for capacity/power savings growing that significantly?

the other issue is that electricity is pretty cheap, so it takes a LOT of power savings to equal the cost of a new server.

The case for the /usr merge

Posted Jan 30, 2012 20:12 UTC (Mon) by raven667 (subscriber, #5198) [Link]

The efficiency is that you can run an 8 or 12 core machine with 64-128 GB RAM using the same power as a 2-core machine with 4-8 GB RAM from 5 years ago. Power needs per CPU core are dropping. That trend, coupled with virtualization, allows you to get more out of your new hardware purchase, consolidating 20:1 on servers and 40:1 on power to run the same workload, and giving your facilities plenty of breathing room for growth. Having a datacenter run out of power/cooling is very expensive.
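
A toy model of that claim as stated (the wattages are illustrative assumptions, not measurements -- and note that dlang questions the equal-wattage premise below):

    # Fold 20 lightly loaded old servers onto one virtualization host.
    # All wattages are assumptions for illustration only.
    OLD_SERVER_W = 300      # assumed draw per 5-year-old 2-core server
    NEW_SERVER_W = 300      # assumed draw of one modern 8-12 core host
    CONSOLIDATED = 20       # the 20:1 consolidation figure above
    COOLING = 1.0           # ~1 W of cooling per W of IT load

    it_saved = CONSOLIDATED * OLD_SERVER_W - NEW_SERVER_W
    total_saved = it_saved * (1 + COOLING)  # cooling roughly doubles it
    print(it_saved, total_saved)            # 5700 11400.0 (watts)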

The case for the /usr merge

Posted Jan 30, 2012 20:30 UTC (Mon) by dlang (subscriber, #313) [Link]

consolidating 20:1 on servers is fantastic; my company is going heavily into virtualization and is only seeing about 3:1 so far (with the target being 6:1)

also, this sort of savings from virtualization assumes that you are running your prior servers lightly loaded. If you have an application that is large enough that you need to run it on multiple machines to start with, virtualization is a net loss (although this net loss is frequently covered by doing the consolidation at the same time as a server upgrade)

I don't know about your servers, but on the ones I am seeing, 8-12 cores with 64-128G of RAM takes significantly more power than the 2-core servers with 4-8G of RAM that I still have in production. Measurements show somewhere between 2x and 3x.

The case for the /usr merge

Posted Jan 30, 2012 21:06 UTC (Mon) by sbergman27 (guest, #10767) [Link]

This reminds me of that great TV commercial IBM did several years ago. "They've stolen the servers! They've stolen the servers!"

The Heist: http://www.youtube.com/watch?v=T-NpLu2xC38

-Steve

The case for the /usr merge

Posted Jan 30, 2012 20:10 UTC (Mon) by sbergman27 (guest, #10767) [Link]

"I would put 100k-150k for a lifetime of a car, most people don't hold on to them even that long."

That's quite a different thing from the lifetime of the car. It gets sold to someone else. Pick up an Autotrader mini-mag sometime -- basically a classifieds mag specializing in used cars. Not classics, necessarily, just used cars. 200,000+ miles is not in the least uncommon.

And I, too, would be interested in the data supporting your claim of such amazing efficiency improvements in computers.

Also, while I have your ear, and in reference to another thread, I would be interested in your explanation as to why the Linux I/O schedulers would not sort the read/write requests of a random access benchmark to provide *far* better performance than the 6ms per request that you seem to agree with Dave about. Even the noop scheduler does elevator sorting of requests. For that matter, so does the drive's internal cache.

If you do not understand one or more of those terms, let me know and I will explain them to you.

The case for the /usr merge

Posted Jan 30, 2012 21:08 UTC (Mon) by raven667 (subscriber, #5198) [Link]

I did some googling and came up with some representative links. The take-away is that each new generation of machines for the last 5 years or so has had roughly the same or slightly higher power consumption, but we went from dual single-core to dual dual-core to dual quad-core to dual hex-core without doubling, quadrupling, or octupling power usage. All that power not being turned into heat also takes a load off cooling, which is more power savings.

http://www.networkjack.info/blog/2007/02/13/power-consump...
http://www.utahsysadmin.com/2007/04/19/power-requirements...

http://www.dell.com/us/dfb/p/poweredge-1950/pd
http://www.dell.com/us/business/p/poweredge-r610/pd

As far as IO schedulers go, elevators help but are no panacea. I think the estimate of 175 IOPS on a 7.2k RPM drive is about right. A 15k RPM drive may get you close to 250 IOPS, but that's the limit of spinning rust. An average seek time of 6ms doesn't seem out of whack; it actually sounds pretty good. Even with perfect elevators, if the data isn't immediately adjacent then there is going to be some number of milliseconds of track-to-track seek time for every IO. And the longer an IO is delayed so that it can be sorted, the more latency is added onto all the requests. In any event, a random IO test is going to be the worst possible case for an elevator algorithm.
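
For reference, the usual rule-of-thumb model behind such IOPS estimates (the drive specs below are typical published figures, used here as assumptions): each random IO costs roughly the average seek time plus half a rotation.

    # Rule-of-thumb random IOPS: one IO pays the average seek plus the
    # average rotational latency (half a revolution). Specs are assumed.
    def iops(rpm, avg_seek_ms):
        rot_latency_ms = 60_000 / rpm / 2     # half a revolution, in ms
        return 1000 / (avg_seek_ms + rot_latency_ms)

    print(round(iops(7200, 8.5)))    # ~79 for a stock 7.2k drive
    print(round(iops(15000, 3.5)))   # ~182 for a 15k drive
    # Elevator sorting and command queueing shorten effective seeks,
    # which is how drives beat these naive per-request numbers.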

For example, here are the results of a naive model with track addresses in 0-65535 and a linear track-to-track seek cost. The first table is in random order and the second has been sorted by an elevator; a sketch of the model follows the tables. In practice there is always an elevator somewhere -- in the drive, in the drive controller, in the OS -- so you will never see the first access pattern, and more sorting isn't going to make the second pattern any better.

More info on actual drive characteristics for better modeling

http://en.wikipedia.org/wiki/Disk-drive_performance_chara...

request address seek
1 62640 n/a
2 34681 27959
3 21062 13619
4 39674 18612
5 46138 6464
6 42942 3196
7 3227 39715
8 25600 22373
9 62505 36905
10 18344 44161

Total 213004

request address seek
7 3227 n/a
10 18344 15117
3 21062 2718
8 25600 4538
2 34681 9081
4 39674 4993
6 42942 3268
5 46138 3196
9 62505 16367
1 62640 135

Total 59413
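
Here is the minimal sketch behind those tables (fresh random addresses each run, so the totals will differ from the ones above; the model itself -- addresses drawn from 0-65535, seek cost equal to the track distance -- is as described):

    # Naive elevator model: 10 random track addresses; seek cost is the
    # distance from the previous track. Compare the arrival order with
    # the elevator-sorted order.
    import random

    def total_seek(addresses):
        return sum(abs(b - a) for a, b in zip(addresses, addresses[1:]))

    requests = [random.randrange(65536) for _ in range(10)]
    print("unsorted:", total_seek(requests))
    print("sorted:  ", total_seek(sorted(requests)))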

The case for the /usr merge

Posted Jan 30, 2012 21:16 UTC (Mon) by sbergman27 (guest, #10767) [Link]

Let's move this back to the XFS thread.

The case for the /usr merge

Posted Jan 30, 2012 1:44 UTC (Mon) by slashdot (guest, #22014) [Link]

<<
The current 48-bit LBA scheme, introduced in 2003 with ATA-6 standard, allows addressing up to 128 PiB. Current PC-Compatible computers support INT 13H Extensions, which use 64-bit structures for LBA addressing and should encompass any future extension of LBA addressing, though modern operating systems implement direct disk access and do not use the BIOS subsystems, except at boot load time. However, the common DOS style Master boot record partition table only supports disk partitions up to 2 TiB in size. For large partitions this needs to be replaced by another scheme for instance the GUID Partition Table which has the same 64-bit limit as the current INT 13H Extensions.
>>

The limit is in the MBR format and/or BIOS implementation.
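
The arithmetic, for reference, assuming 512-byte sectors:

    # Addressable capacity under each addressing scheme, 512-byte sectors:
    SECTOR = 512
    print(2**32 * SECTOR / 2**40, "TiB")   # 2.0   -- 32-bit MBR fields
    print(2**48 * SECTOR / 2**50, "PiB")   # 128.0 -- 48-bit LBA (ATA-6)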

