
Trading off safety and performance in the kernel

Posted May 12, 2015 23:37 UTC (Tue) by wahern (subscriber, #37304)
In reply to: Trading off safety and performance in the kernel by zblaxell
Parent article: Trading off safety and performance in the kernel

Speaking of anachronisms, how common are spinning disks in laptops these days?



Trading off safety and performance in the kernel

Posted May 13, 2015 1:46 UTC (Wed) by zblaxell (subscriber, #26385) [Link] (7 responses)

> Speaking of anachronisms, how common are spinning disks in laptops these days?

Most laptop drives spin by default, and they'll likely continue to do so as long as SSDs insist on pricing over $70/TB.

Trading off safety and performance in the kernel

Posted May 13, 2015 20:25 UTC (Wed) by zlynx (guest, #2285) [Link] (6 responses)

This idea that price per gigabyte makes SSDs too expensive just keeps moving the goalposts.

I remember when people were claiming $1 per GB was the magic price point. Then it was $0.50 per GB.

As long as spinning hard drives are cheaper per GB there will always be people claiming SSD is too expensive. But at some point it gets to be like claiming your laptop needs a tape drive or a floppy.

Trading off safety and performance in the kernel

Posted May 13, 2015 21:27 UTC (Wed) by reubenhwk (guest, #75803) [Link] (4 responses)

Please still using tape drives should be recycled along with their crappy electronics.

Trading off safety and performance in the kernel

Posted May 13, 2015 21:28 UTC (Wed) by reubenhwk (guest, #75803) [Link] (3 responses)

Please -> *People*

Trading off safety and performance in the kernel

Posted May 13, 2015 21:43 UTC (Wed) by dlang (guest, #313) [Link] (2 responses)

tape drives still have a better cost profile when dealing with very large data volumes

Trading off safety and performance in the kernel

Posted May 13, 2015 22:01 UTC (Wed) by zlynx (guest, #2285) [Link]

And yet you still don't want one in your laptop, which is why I compared spinning hard disks to tape drives. A spinning disk is rapidly approaching a tape drive in speed and general usefulness.

Trading off safety and performance in the kernel

Posted May 13, 2015 22:26 UTC (Wed) by drag (guest, #31333) [Link]

> tape drives still have a better cost profile when dealing with very large data volumes

Sorta, sometimes, and not really.. depending on your use case.

If you actually want to be able to access your data in any sort of reasonable time frame, then throwing the cheapest servers possible, stuffed full of 3.5-inch 7200rpm drives, at the problem is a far, far better option. For money, time, and sanity. :)

SSHD

Posted May 14, 2015 16:22 UTC (Thu) by marcH (subscriber, #57642) [Link]

> As long as spinning hard drives are cheaper per GB there will always be people claiming SSD is too expensive. But at some point it gets to be like claiming your laptop needs a tape drive or a floppy.

As long as you can have more storage for the same price, why would you not? Pictures and movies don't need SSD performance at all.

Asking whether SSDs will win over spinning drives is like asking whether L1 caches will win over L2 caches.

Desktop users solved this problem long ago: they get one small and cheap SSD for the system and applications plus one big and cheap HD for pure storage. For laptops, SSHDs look interesting: two drives in a single enclosure.

There is, however, something entirely different that is killing spinning drives much faster than SSD price points: the cloud, which is making the idea of local storage itself obsolete. Think laptops with a small, dirt-cheap eMMC used mainly as a network cache.

Trading off safety and performance in the kernel

Posted May 13, 2015 13:22 UTC (Wed) by pbonzini (subscriber, #60935) [Link] (23 responses)

CPUs still produce a lot of heat if not idle though.

Trading off safety and performance in the kernel

Posted May 13, 2015 20:28 UTC (Wed) by zlynx (guest, #2285) [Link] (22 responses)

Yeah. If designers were thinking of the full consumer experience they'd put a "force CPU to slowest clock" step first in the suspend path. Then it could take as long as it needed to flush buffers and hibernate, or whatever.

Even waking up inside a laptop bag to check email or alert for calendar events is not too bad when the CPU is locked to 800 MHz.
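The "force CPU to slowest clock" step could be sketched against the standard Linux cpufreq sysfs interface. The file paths are the real cpufreq layout; the helper function itself is a hypothetical illustration, not anything from the thread:

```python
import glob
import os

def cap_all_cpus(root="/sys/devices/system/cpu"):
    # For every CPU policy, copy the lowest supported frequency
    # (cpuinfo_min_freq, in kHz) into scaling_max_freq, pinning the
    # CPU to its slowest clock until the cap is raised again.
    for policy in glob.glob(os.path.join(root, "cpu[0-9]*", "cpufreq")):
        try:
            with open(os.path.join(policy, "cpuinfo_min_freq")) as f:
                slowest = f.read().strip()
            with open(os.path.join(policy, "scaling_max_freq"), "w") as f:
                f.write(slowest)
        except OSError:
            pass  # CPU offline, or the governor doesn't expose the knob

# Usage (needs root): call cap_all_cpus() at the start of the suspend
# path, and restore scaling_max_freq from cpuinfo_max_freq on resume.
```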

Trading off safety and performance in the kernel

Posted May 14, 2015 12:18 UTC (Thu) by hmh (subscriber, #3838) [Link] (21 responses)

Race to idle at top clock speeds very often generates less heat than staying active for a longer period due to a slower clock.

The CPU should be idle most of the time during a sync: it is the kind of operation that is supposed to be IO-bound...

Trading off safety and performance in the kernel

Posted May 14, 2015 15:21 UTC (Thu) by cesarb (subscriber, #6266) [Link]

> The CPU should be idle most of the time during a sync: it is the kind of operation that is supposed to be IO-bound...

Not if you're using dm-crypt without AESNI.

Trading off safety and performance in the kernel

Posted May 14, 2015 18:01 UTC (Thu) by zlynx (guest, #2285) [Link] (4 responses)

Race to idle while in a bag with no heat dissipation could work, but only if the device includes temperature in its speed calculations. Most laptops don't. Not until the CPU has reached an excessively high temp, which also starts to cook its surrounding components like the battery and GPU.

The simplest way to ensure that overheat doesn't happen is to use the slowest possible CPU speed. But the best way to implement it would be with temperature sensors so that after running 10 seconds at 3 GHz it realizes there's no airflow and slows down.

Sadly, with how ignored and haphazard sensor support is in Linux, I don't think that will work.

If Linux users really cared about sensors, sensor data would show up in desktop tools like Gnome System Monitor, with watts used and CPU temperature shown alongside CPU usage and clock speed. Common hardware would be recognized and their sensors properly labeled instead of "temp1" and "fan2."
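The "temp1"/"fan2" complaint can be seen directly in the hwmon sysfs tree: drivers may supply a `*_label` file for a sensor, but most don't, so tools fall back to the generic name. A minimal sketch (real sysfs layout, hypothetical helper functions):

```python
import glob
import os

def sensor_label(hwmon_dir, sensor):
    # Drivers may provide e.g. temp1_label; most don't, leaving the UI
    # with the generic "temp1" / "fan2" names complained about above.
    label_file = os.path.join(hwmon_dir, sensor + "_label")
    if os.path.exists(label_file):
        with open(label_file) as f:
            return f.read().strip()
    return sensor  # no label: fall back to the generic name

def list_sensors(root="/sys/class/hwmon"):
    out = []
    for hw in sorted(glob.glob(os.path.join(root, "hwmon*"))):
        for inp in sorted(glob.glob(os.path.join(hw, "*_input"))):
            sensor = os.path.basename(inp)[: -len("_input")]
            with open(inp) as f:
                raw = int(f.read().strip())  # millidegrees, RPM, etc.
            out.append((sensor_label(hw, sensor), raw))
    return out
```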

Trading off safety and performance in the kernel

Posted May 14, 2015 18:48 UTC (Thu) by pizza (subscriber, #46) [Link] (3 responses)

> Common hardware would be recognized and their sensors properly labeled instead of "temp1" and "fan2."

Aside from on-die CPU sensors, there is no such thing as "common hardware" when it comes to sensors.

All Linux can do is provide the raw data and any exposed hooks to control the system. Which it already does. The rest is purely policy, and that's ultimately up to the user, with initial policy configured by the system builder/integrator.

As for displaying sensor data, I've had that data displayed on my desktop for fifteen years or so. And that's all that can be done without customizing the policy for each and every special snowflake of a system.

Trading off safety and performance in the kernel

Posted May 14, 2015 21:29 UTC (Thu) by zlynx (guest, #2285) [Link] (2 responses)

> Aside from on-die CPU sensors, there is no such thing as "common hardware" when it comes to sensors.

Somebody -- I know that means that I should do it, but no time and not enough interest -- should build a CDDB-type system for Linux hardware, so that for each machine type it is enough for ONE person to label everything in the system: sensors, audio ports, etc.

Then on system setup the distro could look up all of that stuff.
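One plausible lookup key for such a database would be the DMI strings Linux exposes under sysfs. The paths below are real; the database itself is imaginary, and the key derivation is just a sketch:

```python
import hashlib
import os

DMI = "/sys/class/dmi/id"
FIELDS = ("sys_vendor", "product_name", "product_version", "board_name")

def dmi_key(root=DMI, fields=FIELDS):
    # Concatenate the firmware-provided identity strings and hash them
    # into a stable key a crowd-sourced database could be indexed by.
    parts = []
    for name in fields:
        try:
            with open(os.path.join(root, name)) as f:
                parts.append(f.read().strip())
        except OSError:
            parts.append("")  # not all firmware populates every field
    return hashlib.sha256("|".join(parts).encode()).hexdigest()
```

Note this inherits the weakness raised in the replies: if a vendor changes the internals without changing any DMI string, two different machines hash to the same key.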

Trading off safety and performance in the kernel

Posted May 14, 2015 23:34 UTC (Thu) by dlang (guest, #313) [Link] (1 responses)

Good luck. How are you going to reliably identify what system you are running on? With some vendors even the exact model number isn't enough, because they change the internals without changing the model number.

Trading off safety and performance in the kernel

Posted May 15, 2015 4:33 UTC (Fri) by marcH (subscriber, #57642) [Link]

CDDB is not 100% reliable but it's useful and popular anyway.

At the start it was very incomplete yet it became popular very quickly. Same as many other crowd-sourced services.

Trading off safety and performance in the kernel

Posted May 14, 2015 18:30 UTC (Thu) by dlang (guest, #313) [Link] (14 responses)

race to idle assumes that it's cpu processing that determines when you can get to idle.

If you are saving the contents of RAM to disk, then you aren't going to finish any sooner at 5GHz clock than at 500MHz, the limiting factor is going to be your disk I/O performance. So if you can do this at 500MHz rather than 5GHz, you generate significantly less heat.

Also, even in a backpack, there is some heat dissipation going on, so you may not overheat if you are running slowly enough.
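The I/O-bound claim above can be put as a back-of-envelope calculation. All numbers here are assumed for illustration; the point is only that when the CPU work (overlapped with I/O) finishes before the disk does, total time is set by the disk alone:

```python
RAM_BYTES = 8 * 1024**3        # assumed hibernation image: 8 GiB
DISK_BPS = 100 * 1024**2       # assumed sequential write: 100 MiB/s
CPU_CYCLES = 2e10              # assumed CPU work (compression etc.)

io_time = RAM_BYTES / DISK_BPS             # ~82 s, clock-independent
t_5ghz = max(io_time, CPU_CYCLES / 5e9)    # CPU done in 4 s: disk-bound
t_500mhz = max(io_time, CPU_CYCLES / 5e8)  # CPU done in 40 s: still disk-bound

assert t_5ghz == t_500mhz      # same wall time at either clock
```

The counter-argument in the replies is that the assumption baked into `max()` (perfect overlap, and C-state/memory-controller power behaving the same at both clocks) is exactly what is not obvious.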

Trading off safety and performance in the kernel

Posted May 14, 2015 19:29 UTC (Thu) by mjg59 (subscriber, #23239) [Link] (13 responses)

> So if you can do this at 500MHz rather than 5GHz, you generate significantly less heat.

Really? Doing it at 500MHz means

(a) that the CPU is going to spend more time in C0, and that's going to limit your ability to get into the deeper C states.
(b) that your memory bus is probably going to be clocked lower, which is going to have a pretty significant impact on the length of time the CPU is going to spend awake

I'm not saying that it's impossible, but it's certainly not obvious.

Trading off safety and performance in the kernel

Posted May 14, 2015 19:42 UTC (Thu) by dlang (guest, #313) [Link] (8 responses)

If you are writing memory to disk, clocking the memory slower isn't going to slow down how quickly the disk can write data.

As for the CPU spending more time in C0, do you really think that it is going into various sleep states while it is writing data to disk as fast as it can?

Trading off safety and performance in the kernel

Posted May 14, 2015 19:48 UTC (Thu) by mjg59 (subscriber, #23239) [Link] (7 responses)

> If you are writing memory to disk, clocking the memory slower isn't going to slow down how quickly the disk can write data.

It's going to increase the amount of time the package has to stay awake to have the memory controller powered.

> do you really think that it is going into various sleep states while it is writing data to disk as fast as it can?

Your argument is that you're not CPU bound. If you're not CPU bound, the CPU is spending time idle. If the CPU is idle, it enters C states.

Trading off safety and performance in the kernel

Posted May 14, 2015 20:01 UTC (Thu) by dlang (guest, #313) [Link] (6 responses)

> It's going to increase the amount of time the package has to stay awake to have the memory controller powered.

the memory doesn't get powered off between accesses. It can only be powered off once the data has been written to disk. If the limiting factor is the time it takes to write it to disk, slowing down the memory clock is not going to require that it remain powered on longer.

> Your argument is that you're not CPU bound. If you're not CPU bound, the CPU is spending time idle. If the CPU is idle, it enters C states.

Ok, but how deep a C state is it going to be able to go into if it's in the middle of writing to disk as quickly as it can? and do the shallow C states really save that much power over a lower clock speed? race-to-idle really assumes that you can stop powering everything when you hit idle. If the vast majority of the system (and even CPU) is still having to run to be able to respond to interrupts and manage I/O, you aren't really idle yet.

Trading off safety and performance in the kernel

Posted May 14, 2015 20:06 UTC (Thu) by mjg59 (subscriber, #23239) [Link] (5 responses)

> the memory doesn't get powered off between accesses

I didn't say it did. I said that the memory controller gets powered down between accesses, and the memory goes into self refresh.

> how deep a C state is it going to be able to go into if it's in the middle of writing to disk as quickly as it can?

That's going to depend on a bunch of factors, including I/O latency. There's no single answer.

> and do the shallow C states really save that much power over a lower clock speed?

Yes. Even the most shallow C state will unclock the core, and running at 0MHz is somewhat cheaper than running at 500MHz.

> race-to-idle really assumes that you can stop powering everything when you hit idle.

No it doesn't.

Trading off safety and performance in the kernel

Posted May 14, 2015 23:42 UTC (Thu) by dlang (guest, #313) [Link] (4 responses)

>> and do the shallow C states really save that much power over a lower clock speed?

> Yes. Even the most shallow C state will unclock the core, and running at 0MHz is somewhat cheaper than running at 500MHz.

remember that switching C states isn't free (in either energy or time), so it may not be a win if you don't stay there very long.

We obviously have very different expectations in how the hardware is going to behave at the different states. But keep in mind that I'm not saying that reducing the clock speed is always the right thing to do, I am just unconvinced that it's never the right thing to do the way that you seem to be.

Trading off safety and performance in the kernel

Posted May 14, 2015 23:56 UTC (Thu) by mjg59 (subscriber, #23239) [Link]

Shallow C states are basically free on modern CPUs. Deeper ones will drop cache, but that's basically irrelevant in the case we're discussing.
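How much time a core actually spends in each C state can be inspected through the standard Linux cpuidle sysfs layout; a minimal sketch (real paths, hypothetical helper):

```python
import glob
import os

def _read(path):
    with open(path) as f:
        return f.read().strip()

def cstate_residency(root="/sys/devices/system/cpu/cpu0/cpuidle"):
    # Each stateN directory describes one C state; "usage" counts
    # entries and "time" is total residency in microseconds since boot.
    states = []
    for st in sorted(glob.glob(os.path.join(root, "state[0-9]*"))):
        states.append({
            "name": _read(os.path.join(st, "name")),
            "usage": int(_read(os.path.join(st, "usage"))),
            "time_us": int(_read(os.path.join(st, "time"))),
        })
    return states
```

Comparing the residency counters before and after an I/O-heavy sync would show how deep the package actually gets while the disk is busy.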

Trading off safety and performance in the kernel

Posted May 15, 2015 4:46 UTC (Fri) by marcH (subscriber, #57642) [Link] (1 responses)

> But keep in mind that I'm not saying that [X] is always the right thing to do, I am just unconvinced that it's never the right thing to do the way that you seem to be.

I had to waste 5 minutes reading the entire thread again to make sure I did not dream and that the exact opposite happened.

Trading off safety and performance in the kernel

Posted May 15, 2015 19:55 UTC (Fri) by bronson (subscriber, #4806) [Link]

Me too. That was surreal.

Trading off safety and performance in the kernel

Posted May 15, 2015 4:51 UTC (Fri) by mjg59 (subscriber, #23239) [Link]

Oh, right. Yes.

> But keep in mind that I'm not saying that reducing the clock speed is always the right thing to do, I am just unconvinced that it's never the right thing to do the way that you seem to be.

From https://lwn.net/Articles/644541/ (written by you)

> If you are saving the contents of RAM to disk, then you aren't going to finish any sooner at 5GHz clock than at 500MHz, the limiting factor is going to be your disk I/O performance. So if you can do this at 500MHz rather than 5GHz, you generate significantly less heat.

From https://lwn.net/Articles/644549/ (written by me)

> I'm not saying that it's impossible, but it's certainly not obvious.


Trading off safety and performance in the kernel

Posted May 14, 2015 21:34 UTC (Thu) by zlynx (guest, #2285) [Link] (3 responses)

I have actual, although anecdotal, data that a laptop allowed to clock to 2.5 GHz will overheat in a bag while a laptop locked to the lowest speed, 800 MHz in my case, will not overheat. It can sit in that bag until the battery runs down, happily processing things.

So, while in theory race to idle might be the way to go, in practice a laptop that is running user-space while waiting to sync to disk is going to burn itself up at 2.5 GHz.

Trading off safety and performance in the kernel

Posted May 14, 2015 21:42 UTC (Thu) by mjg59 (subscriber, #23239) [Link] (2 responses)

If the system is in any kind of state where it has an effectively unbounded amount of work to perform then the situation changes pretty significantly. There are various cases where apps behave badly when they lose network connectivity and spin trying to reconnect, for instance.

Trading off safety and performance in the kernel

Posted May 14, 2015 23:47 UTC (Thu) by dlang (guest, #313) [Link] (1 responses)

That's not the issue. The example given is that a machine running at max speed for 10 minutes will overheat, while one running for several hours at the low speed will not.

For this example, race to idle fails if it takes too long, because the system will overheat, while running at a lower speed, even if it takes a lot more time and power, will succeed and not damage things.

race-to-idle requires a very specific combination of power/performance at the different states (full speed, partial speed, and idle). That combination has not always been the case and there's no reason to believe that it is going to continue to always be the case. Idle does not always mean that it requires zero power (even for the component that's idled, let alone for the entire system)

Trading off safety and performance in the kernel

Posted May 15, 2015 5:04 UTC (Fri) by mjg59 (subscriber, #23239) [Link]

> The example given is that a machine running at max speed for 10 min will overheat while one running for several hours at the low speed will not.

Uh? I'm possibly missing something here, but I don't see any references to that example.


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds