>[T]he problem is likely to be solved by scaling the scheduler's load calculations by a constant value associated with each processor. Processes running on a CPU that is ten times faster than another will accumulate load ten times more quickly.
Faster isn't the goal here. Efficiency is the goal, so scaling based on the relative efficiency of the 'big' or 'LITTLE' core would fit better with the aim of improving battery life.
For example, a core which runs at 600MHz and consumes 0.3 watts with a bogoMIPS of 1 would measure 2000 mega-ops-per-joule, and you'd compare it to a core that runs at 1600MHz and consumes 12 watts with a bogoMIPS of 4 (so 533 1/3 mega-ops per joule). In a race-to-idle scenario it's obvious where to schedule the work, but in a long-running scenario there may be certain workloads you wouldn't leave on the LITTLE core, because they break into blocks that can win the race-to-idle on the big core. Calculating where that line is will depend on the cost of moving work between cores, but for a first approximation: how much time do you get on the big core before you've used a second's worth of energy on the LITTLE core? (Assuming there's no cost to start and stop cores, I think that's 1/40 second, so work that fits in intervals of < 1/40 second is unexpectedly better off on the big core.)
Note: if these numbers aren't quite right, please do tell. :-)