Trading off safety and performance in the kernel
Posted May 12, 2015 23:37 UTC (Tue) by wahern (subscriber, #37304)
In reply to: Trading off safety and performance in the kernel by zblaxell
Parent article: Trading off safety and performance in the kernel
Posted May 13, 2015 1:46 UTC (Wed)
by zblaxell (subscriber, #26385)
[Link] (7 responses)
Most laptop drives spin by default, and they'll likely continue to do so as long as SSDs insist on pricing over $70/TB.
Posted May 13, 2015 20:25 UTC (Wed)
by zlynx (guest, #2285)
[Link] (6 responses)
I remember when people were claiming $1 per GB was the magic price point. Then it was $0.50 per GB.
As long as spinning hard drives are cheaper per GB there will always be people claiming SSD is too expensive. But at some point it gets to be like claiming your laptop needs a tape drive or a floppy.
Posted May 13, 2015 21:27 UTC (Wed)
by reubenhwk (guest, #75803)
[Link] (4 responses)
Posted May 13, 2015 21:28 UTC (Wed)
by reubenhwk (guest, #75803)
[Link] (3 responses)
Posted May 13, 2015 21:43 UTC (Wed)
by dlang (guest, #313)
[Link] (2 responses)
Posted May 13, 2015 22:01 UTC (Wed)
by zlynx (guest, #2285)
[Link]
Posted May 13, 2015 22:26 UTC (Wed)
by drag (guest, #31333)
[Link]
Sorta, sometimes, and not really... depending on your use case.
If you actually want to be able to access your data in any sort of reasonable time frame, then throwing the cheapest possible servers, stuffed full of 3.5-inch 7200 RPM drives, at the problem is a far, far better option. For money, time, and sanity. :)
Posted May 14, 2015 16:22 UTC (Thu)
by marcH (subscriber, #57642)
[Link]
As long as you can have more storage for the same price, why would you not? Pictures and movies don't need SSD performance at all.
Asking whether SSDs will win over spinning drives is like asking whether L1 caches will win over L2 caches.
Desktop users solved this problem long ago: they get one small and cheap SSD for the system and applications plus one big and cheap HD for pure storage. For laptops, SSHDs look interesting: two drives in a single enclosure.
There is, however, something entirely different that is killing spinning drives much faster than any SSD price point: the cloud, which is making the idea of local storage itself obsolete. Think laptops with a small, dirt-cheap eMMC used mainly as a network cache.
Posted May 13, 2015 13:22 UTC (Wed)
by pbonzini (subscriber, #60935)
[Link] (23 responses)
Posted May 13, 2015 20:28 UTC (Wed)
by zlynx (guest, #2285)
[Link] (22 responses)
Even waking up inside a laptop bag to check email or alert for calendar events is not too bad when the CPU is locked to 800 MHz.
Posted May 14, 2015 12:18 UTC (Thu)
by hmh (subscriber, #3838)
[Link] (21 responses)
The CPU should be idle most of the time during a sync: it is the kind of operation that is supposed to be IO-bound...
Posted May 14, 2015 15:21 UTC (Thu)
by cesarb (subscriber, #6266)
[Link]
Not if you're using dm-crypt without AES-NI.
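(A minimal sketch, assuming Linux on x86: the kernel advertises AES-NI via the "aes" flag in /proc/cpuinfo, so you can check whether an encrypted sync will be CPU-bound on your own machine.)

# Check whether the CPU advertises AES-NI; without it, dm-crypt I/O
# tends to be CPU-bound rather than disk-bound.
def has_aesni(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return "aes" in line.split(":", 1)[1].split()
    return False

if __name__ == "__main__":
    print("AES-NI present" if has_aesni()
          else "No AES-NI: dm-crypt will burn CPU during a sync")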
Posted May 14, 2015 18:01 UTC (Thu)
by zlynx (guest, #2285)
[Link] (4 responses)
The simplest way to ensure that overheating doesn't happen is to use the slowest possible CPU speed. But the best way to implement it would be with temperature sensors, so that after running 10 seconds at 3 GHz it realizes there's no airflow and slows down.
Sadly, with how ignored and haphazard sensor support is in Linux, I don't think that will work.
If Linux users really cared about sensors, sensor data would show up in desktop tools like GNOME System Monitor, with watts used and CPU temperature shown alongside CPU usage and clock speed. Common hardware would be recognized and its sensors properly labeled instead of "temp1" and "fan2."
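(As an illustration of the labeling problem, here is a minimal sketch that walks the kernel's standard hwmon sysfs interface; on most machines it prints exactly the generic "temp1"-style names complained about above.)

import glob, os

# Walk /sys/class/hwmon and print each chip's temperature sensors.
# Labels fall back to the generic "tempN" name when the driver
# provides no tempN_label file, which is the common case.
for hwmon in sorted(glob.glob("/sys/class/hwmon/hwmon*")):
    with open(os.path.join(hwmon, "name")) as f:
        chip = f.read().strip()
    for temp in sorted(glob.glob(os.path.join(hwmon, "temp*_input"))):
        sensor = os.path.basename(temp)[:-len("_input")]      # e.g. "temp1"
        label = sensor
        label_path = os.path.join(hwmon, sensor + "_label")
        if os.path.exists(label_path):
            with open(label_path) as f:
                label = f.read().strip()
        with open(temp) as f:
            millideg = int(f.read().strip())                   # millidegrees C
        print("%s/%s: %.1f C" % (chip, label, millideg / 1000))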
Posted May 14, 2015 18:48 UTC (Thu)
by pizza (subscriber, #46)
[Link] (3 responses)
Aside from on-die CPU sensors, there is no such thing as "common hardware" when it comes to sensors.
All Linux can do is provide the raw data and any exposed hooks to control the system. Which it already does. The rest is purely policy, and that's ultimately up to the user, with initial policy configured by the system builder/integrator.
As for displaying sensor data, I've had that data displayed on my desktop for fifteen years or so. And that's all that can be done without customizing the policy for each and every special snowflake of a system.
Posted May 14, 2015 21:29 UTC (Thu)
by zlynx (guest, #2285)
[Link] (2 responses)
Somebody -- and I know that means I should do it, but no time and not enough interest -- should build a CDDB-type system for Linux hardware, so that for each machine type it is enough for ONE person to label everything in the system: sensors, audio ports, etc.
Then on system setup the distro could look up all of that stuff.
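(A purely illustrative sketch of what the lookup side of such a service could look like. The database URL and JSON format are invented for the example; only the DMI sysfs files used to identify the machine are real.)

import json
import os
from urllib.parse import quote
from urllib.request import urlopen

def machine_id():
    # DMI strings identify the machine model on most x86 systems.
    def read(name):
        with open(os.path.join("/sys/class/dmi/id", name)) as f:
            return f.read().strip()
    return read("sys_vendor"), read("product_name")

def fetch_labels(vendor, product, base="https://hwdb.example.org"):
    # Hypothetical crowd-sourced database, keyed by vendor and model,
    # returning e.g. {"sensors": {"temp1": "CPU package", "fan2": "chassis fan"}}
    url = "%s/%s/%s.json" % (base, quote(vendor), quote(product))
    with urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    vendor, product = machine_id()
    print("Would look up labels for:", vendor, product)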
Posted May 14, 2015 23:34 UTC (Thu)
by dlang (guest, #313)
[Link] (1 responses)
Posted May 15, 2015 4:33 UTC (Fri)
by marcH (subscriber, #57642)
[Link]
At the start it was very incomplete yet it became popular very quickly. Same as many other crowd-sourced services.
Posted May 14, 2015 18:30 UTC (Thu)
by dlang (guest, #313)
[Link] (14 responses)
If you are saving the contents of RAM to disk, then you aren't going to finish any sooner at a 5 GHz clock than at 500 MHz; the limiting factor is going to be your disk I/O performance. So if you can do this at 500 MHz rather than 5 GHz, you generate significantly less heat.
Also, even in a backpack, there is some heat dissipation going on, so you may not overheat if you are running slowly enough.
Posted May 14, 2015 19:29 UTC (Thu)
by mjg59 (subscriber, #23239)
[Link] (13 responses)
Really? Doing it at 500MHz means
(a) that the CPU is going to spend more time in C0, and that's going to limit your ability to get into the deeper C states, and
(b) that your memory bus is probably going to be clocked lower, which is going to have a pretty significant impact on the length of time the CPU is going to spend awake.
I'm not saying that it's impossible, but it's certainly not obvious.
Posted May 14, 2015 19:42 UTC (Thu)
by dlang (guest, #313)
[Link] (8 responses)
As for the CPU spending more time in C0, do you really think that it is going into various sleep states while it is writing data to disk as fast as it can?
Posted May 14, 2015 19:48 UTC (Thu)
by mjg59 (subscriber, #23239)
[Link] (7 responses)
It's going to increase the amount of time the package has to stay awake to have the memory controller powered.
> do you really think that it is going into various sleep states while it is writing data to disk as fast as it can?
Your argument is that you're not CPU bound. If you're not CPU bound, the CPU is spending time idle. If the CPU is idle, it enters C states.
Posted May 14, 2015 20:01 UTC (Thu)
by dlang (guest, #313)
[Link] (6 responses)
the memory doesn't get powered off between accesses. It can only be powered off once the data has been written to disk. If the limiting factor is the time it takes to write it to disk, slowing down the memory clock is not going to require that it remain powered on longer.
> Your argument is that you're not CPU bound. If you're not CPU bound, the CPU is spending time idle. If the CPU is idle, it enters C states.
OK, but how deep a C state is it going to be able to go into if it's in the middle of writing to disk as quickly as it can? And do the shallow C states really save that much power over a lower clock speed? Race-to-idle really assumes that you can stop powering everything when you hit idle. If the vast majority of the system (and even the CPU) is still having to run to be able to respond to interrupts and manage I/O, you aren't really idle yet.
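(One way to answer the "how deep" question empirically, as a minimal sketch: sample the residency counters the kernel's cpuidle framework already exports, before and after the workload of interest.)

import glob
import os
import time

def cstate_times(cpu=0):
    # Returns {state name: total residency in microseconds} for one CPU.
    times = {}
    for state in glob.glob("/sys/devices/system/cpu/cpu%d/cpuidle/state*" % cpu):
        with open(os.path.join(state, "name")) as f:
            name = f.read().strip()
        with open(os.path.join(state, "time")) as f:
            times[name] = int(f.read().strip())
    return times

before = cstate_times()
time.sleep(5)                     # replace with the sync/workload being measured
after = cstate_times()
for name in sorted(after):
    delta_ms = (after[name] - before[name]) / 1000.0
    print("%s: %.0f ms of residency over 5 s" % (name, delta_ms))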
Posted May 14, 2015 20:06 UTC (Thu)
by mjg59 (subscriber, #23239)
[Link] (5 responses)
I didn't say it did. I said that the memory controller gets powered down between accesses, and the memory goes into self refresh.
> how deep a C state is it going to be able to go into if it's in the middle of writing to disk as quickly as it can?
That's going to depend on a bunch of factors, including I/O latency. There's no single answer.
> and do the shallow C states really save that much power over a lower clock speed?
Yes. Even the most shallow C state will unclock the core, and running at 0MHz is somewhat cheaper than running at 500MHz.
> race-to-idle really assumes that you can stop powering everything when you hit idle.
No it doesn't.
Posted May 14, 2015 23:42 UTC (Thu)
by dlang (guest, #313)
[Link] (4 responses)
> Yes. Even the most shallow C state will unclock the core, and running at 0MHz is somewhat cheaper than running at 500MHz.
Remember that switching C states isn't free (in either energy or time), so it may not be a win if you don't stay there very long.
We obviously have very different expectations of how the hardware is going to behave in the different states. But keep in mind that I'm not saying that reducing the clock speed is always the right thing to do; I am just unconvinced that it's never the right thing to do the way that you seem to be.
Posted May 14, 2015 23:56 UTC (Thu)
by mjg59 (subscriber, #23239)
[Link]
Posted May 15, 2015 4:46 UTC (Fri)
by marcH (subscriber, #57642)
[Link] (1 responses)
I had to waste 5 minutes reading the entire thread again to make sure I did not dream and that the exact opposite happened.
Posted May 15, 2015 19:55 UTC (Fri)
by bronson (subscriber, #4806)
[Link]
Posted May 15, 2015 4:51 UTC (Fri)
by mjg59 (subscriber, #23239)
[Link]
> But keep in mind that I'm not saying that reducing the clock speed is always the right thing to do; I am just unconvinced that it's never the right thing to do the way that you seem to be.
From https://lwn.net/Articles/644541/ (written by you)
> If you are saving the contents of RAM to disk, then you aren't going to finish any sooner at a 5 GHz clock than at 500 MHz; the limiting factor is going to be your disk I/O performance. So if you can do this at 500 MHz rather than 5 GHz, you generate significantly less heat.
From https://lwn.net/Articles/644549/ (written by me)
> I'm not saying that it's impossible, but it's certainly not obvious.
…
Posted May 14, 2015 21:34 UTC (Thu)
by zlynx (guest, #2285)
[Link] (3 responses)
So, while in theory race-to-idle might be the way to go, in practice a laptop that is running user space while waiting to sync to disk is going to burn itself up at 2.5 GHz.
Posted May 14, 2015 21:42 UTC (Thu)
by mjg59 (subscriber, #23239)
[Link] (2 responses)
Posted May 14, 2015 23:47 UTC (Thu)
by dlang (guest, #313)
[Link] (1 responses)
For this example, race-to-idle fails if it takes too long, because the system will overheat, while running at a lower speed, even if it takes a lot more time and power, will succeed and not damage things.
Race-to-idle requires a very specific combination of power/performance at the different states (full speed, partial speed, and idle). That combination has not always held, and there's no reason to believe that it is always going to hold in the future. Idle does not always mean zero power (even for the component that's idled, let alone for the entire system).
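(A toy calculation, with all power numbers assumed purely for illustration, showing how the winner flips depending on whether the higher clock actually shortens the job:)

def energy(p_active, t_active, p_idle, window):
    # Joules over a fixed window: active power while working, idle floor afterwards.
    return p_active * t_active + p_idle * (window - t_active)

window = 100.0   # seconds of interest, e.g. the length of the sync

# I/O-bound sync: the higher clock barely shortens the job.
race = energy(p_active=15.0, t_active=95.0, p_idle=2.0, window=window)
slow = energy(p_active=6.0, t_active=98.0, p_idle=2.0, window=window)
print("I/O-bound: race-to-idle %.0f J vs low clock %.0f J" % (race, slow))

# CPU-bound job: the higher clock really does finish ~5x sooner.
race = energy(p_active=15.0, t_active=20.0, p_idle=2.0, window=window)
slow = energy(p_active=6.0, t_active=100.0, p_idle=2.0, window=window)
print("CPU-bound: race-to-idle %.0f J vs low clock %.0f J" % (race, slow))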
Posted May 15, 2015 5:04 UTC (Fri)
by mjg59 (subscriber, #23239)
[Link]
Uh? I'm possibly missing something here, but I don't see any references to that example.