Linux support for ARM big.LITTLE
Posted Feb 17, 2012 16:40 UTC (Fri) by BenHutchings (subscriber, #37955)
In reply to: Linux support for ARM big.LITTLE by dlang
Parent article: Linux support for ARM big.LITTLE
On Intel and AMD x86 CPUs we already have cases where turning off some cores will let you run other cores at a higher clock speed (thermal/current limitations), and where you can run some cores at lower speeds than others.
The Linux scheduler already generically supports the grouping of CPU threads that share resources, and tends to spread runnable tasks out across groups (although it can also be configured to concentrate them in order to save power). I think that this should result in enabling the 'turbo' mode where possible.
Posted Feb 17, 2012 19:05 UTC (Fri)
by dlang (guest, #313)
[Link] (9 responses)
Part of the decision process will also need to consider which programs are running, and how likely those programs are to need significantly more CPU than they are currently using (because switching modes takes a significant amount of time). This involves a lot of policy and a lot of usage profiling, exactly the kinds of things that do not belong in the kernel.
With the exception of the case where there is a single thread using all of a core, I think the existing kernel scheduler will 'just work' on a system with different speed cores.
Where I expect the current scheduler to have problems is the case where a single thread maxes out a CPU: I don't think the scheduler can realize that the thread would max out one CPU but not another.
Posted Feb 17, 2012 19:24 UTC (Fri)
by BenHutchings (subscriber, #37955)
[Link] (8 responses)
Posted Feb 17, 2012 19:38 UTC (Fri)
by dlang (guest, #313)
[Link] (7 responses)
I view this as a three tier system
At the first tier, the scheduler on each core decides which of the processes assigned to that core should run next.
At the second tier, a periodic rebalancing algorithm considers moving jobs from one core to another, preferably run by a core that has idle time (that core 'pulls' work from the others).
These two tiers will handle cores of different speeds without a problem as-is, as long as no thread maxes out the slowest core.
I am saying that the third tier would be the userspace power management daemon, which operates completely asynchronously to the kernel. It watches the overall system and decides when to change the CPU configuration; when it does, it sends a message to the kernel to make the change (change the speed of any core, including powering it off or on).
Until userspace issues the order to make the change, the kernel scheduler just works with what it has; no interaction with userspace is needed.
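The third tier described above could be sketched as a small userspace loop: sample system-wide idle time from /proc/stat, apply a policy decision, and ask the kernel to hotplug a core through the standard sysfs interface. This is only an illustrative sketch, not anything from the thread: the thresholds, the one-second interval, and the choice of cpu1 are all assumptions.

```python
import time

def decide(busy_fraction, low=0.2, high=0.8):
    """Pure policy decision: should a core be brought online or offlined?
    The watermarks are illustrative assumptions."""
    if busy_fraction > high:
        return "online"      # load is high: bring another core up
    if busy_fraction < low:
        return "offline"     # load is low: power a core down
    return None              # leave the configuration alone

def cpu_times():
    """Return (idle, total) jiffies from the aggregate line of /proc/stat."""
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    return fields[3], sum(fields)   # fields[3] is the idle column

def set_core_online(cpu, online):
    """Ask the kernel to hotplug a core via the sysfs CPU hotplug interface."""
    with open(f"/sys/devices/system/cpu/cpu{cpu}/online", "w") as f:
        f.write("1" if online else "0")

def daemon_loop(interval=1.0):
    """Completely asynchronous to the scheduler: wake up, measure, decide."""
    idle0, total0 = cpu_times()
    while True:
        time.sleep(interval)
        idle1, total1 = cpu_times()
        busy = 1.0 - (idle1 - idle0) / max(total1 - total0, 1)
        action = decide(busy)
        if action:
            set_core_online(1, action == "online")
        idle0, total0 = idle1, total1
```

The point of the split is visible in the code: `decide()` is pure policy and lives entirely in userspace, while the kernel only ever sees the final sysfs write.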
Posted Feb 17, 2012 20:26 UTC (Fri)
by BenHutchings (subscriber, #37955)
[Link] (6 responses)
Posted Feb 17, 2012 22:34 UTC (Fri)
by dlang (guest, #313)
[Link] (5 responses)
Doing the analysis every second (or even less frequently) should be pretty good in most cases.
The kernel can make some fairly trivial choices, but they are limited to something along the lines of:
here is a list of power modes; if you think you are idle too much, move down the list, and if you think you are not idle enough, move up the list.
But anything more complicated than this will quickly get out of control.
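That "move along an ordered list" heuristic is simple enough to sketch in a few lines; the mode names and watermarks here are made up purely for illustration.

```python
# Hypothetical ordered list of power modes, cheapest first.
POWER_MODES = ["powersave", "low", "medium", "high", "turbo"]

def next_mode(current, idle_fraction, low_water=0.1, high_water=0.5):
    """Step one mode down when too idle, one mode up when too busy."""
    i = POWER_MODES.index(current)
    if idle_fraction > high_water and i > 0:
        return POWER_MODES[i - 1]   # idling too much: step down the list
    if idle_fraction < low_water and i < len(POWER_MODES) - 1:
        return POWER_MODES[i + 1]   # not idling enough: step up the list
    return current                  # within the comfort band: no change
```

Note that this heuristic only needs one input (idle time) and has no notion of which workload is running, which is exactly why anything smarter has to come from outside the kernel.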
for example,
for the sake of argument, say that 'turbo mode' is defined as:
turn off half your cores and run the other half 50% faster, using the same amount of power (losing 25% of the processing power, probably more due to memory pipeline stalls).
How would the kernel ever decide when it's appropriate to sacrifice so much of its processing power for no power savings?
I could say that I would want to do so if a single thread is using 100% of a cpu in a non-turbo mode.
But what if making that switch would result in all the 'turbo mode' cores being maxed out? It may still be faster to run overloaded for a short time to finish the CPU-hog task sooner.
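The 25% figure in this hypothetical turbo mode works out as follows, using an assumed four-core chip for the arithmetic:

```python
cores, speed = 4, 1.0                             # illustrative baseline
normal_throughput = cores * speed                 # 4 cores * 1.0 = 4.0
turbo_throughput = (cores // 2) * (speed * 1.5)   # 2 cores * 1.5 = 3.0
loss = 1 - turbo_throughput / normal_throughput   # 0.25, i.e. 25% less
# Aggregate throughput drops, yet any single CPU-bound thread now runs
# at 1.5x -- which is why the trade-off can still win for a lone cpu hog.
```

The tension is entirely between aggregate throughput (worse) and single-thread latency (better), and which one matters depends on the workload.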
I don't see any way that this sort of logic can possibly belong in the kernel. And it's also stuff that's not very timing sensitive (if delaying a second to make the decision results in the process finishing first, it was probably not that important a decision to make, for example)
Posted Feb 17, 2012 23:25 UTC (Fri)
by khim (subscriber, #9252)
[Link] (4 responses)
Why do you think so? What are you talking about? This whole discussion looks like it comes from a different universe. Perhaps it's the well-known phenomenon where a requirement that was not at all obvious to one party is so obvious to the other that they didn't think to state it.

We are discussing all this in the context of big.LITTLE processing, right? Which is used by things like tablets and mobile phones, right? Well, the big question here is: do I need to unfreeze the hot and powerful Cortex-A15 core to perform some UI task, or will the slim and cool Cortex-A7 be enough to finish it? The cut-off is dictated by physiology: the task should be finished in less than 70-100ms if it's a reaction to user input, or in 16ms if it's part of an animation. This means the decision to wake up the Cortex-A15 or not must be taken in 1-2ms, tops; better to do it in about 300-500µs. Any solution which alters the power config once per second is so, so, SO far beyond the event horizon it's not even funny.

Wrong criteria. If the Cortex-A7 core can calculate the next frame in 10ms, then there is no need to wake up the Cortex-A15 core, even if for those 10ms the Cortex-A7 is 100% busy. The problems here are numerous and indeed quite time-critical. The only model which makes sense is an in-kernel daemon which actually does the work quickly and efficiently -- but which uses information collected by a userspace daemon.
Posted Feb 18, 2012 0:47 UTC (Sat)
by dlang (guest, #313)
[Link] (3 responses)
Waking from some sleep modes may take 10ms, so if you have deadlines like that, you had better not put the processor to sleep in the first place.
I also think that a delay at the start of an app is forgivable, so if the system needs the faster cores to render things, it should find out quickly, start up those cores, and continue.
I agree that if you can specify an ordered list of configurations and hand that to the kernel you may be able to have the kernel use that.
on the other hand, the example that you give:
> Wrong criteria. If Cortex-A7 core can calculate the next frame in 10ms then there are no need to wake up Cortex-A15 core even if for these 10ms Cortex-A7 is 100% busy.
sort of proves my point. How can the kernel know that the application completed its work if it's busy 100% of the time? (Especially if you have an algorithm that adapts to not having quite enough processor and auto-scales its quality.)
this sort of thing requires knowledge that the kernel does not have.
Consider also the 'turbo mode' example, where you can run some cores faster at the expense of not having the thermal headroom to run as many cores. In every case I am aware of, 'turbo mode' actually reduces the clock cycles available overall (and makes the cpu:memory speed ratio worse, costing more performance), but if you have a single-threaded process that will finish faster in turbo mode, switching to that mode may be the right thing to do.
it doesn't matter if you are a 12 core Intel x86 monster, or a much smaller ARM chip.
Posted Feb 18, 2012 11:09 UTC (Sat)
by khim (subscriber, #9252)
[Link] (2 responses)
Well, sure. But the differences between interactive tasks and batch processing are acute. With batch processing you are optimizing the time for a [relatively] long process; with interactive tasks you optimize work within your tiny 16ms timeslice. It makes no sense to produce the result in 5ms (and pay for it), but if you spend 20ms then you are screwed. Today the difference is not so acute because the most power-hungry part of a smartphone or tablet is the LCD/OLED display. But if/when technologies like Mirasol are adopted, these decisions will suddenly start producing huge differences in battery life.
Posted Feb 18, 2012 11:58 UTC (Sat)
by dlang (guest, #313)
[Link] (1 responses)
I don't think we are ever going to get away from having to choose between keeping things powered up to be responsive and powering things down aggressively to save power.
Posted Feb 20, 2012 12:17 UTC (Mon)
by khim (subscriber, #9252)
[Link]
Hardware is in the labs already (and should reach the market in a few years), so it's time to think about the software side. If we are talking about small tweaks, then such hardware is not yet on the radar; but if we plan to create a whole new subsystem (a task which will itself need two or three years to mature), then it must be considered.