Power Measurements using Trepn

Measuring power draw is a unique and interesting capability of Qualcomm's MDPs. Using the Trepn Profiler software and measurement hardware integrated into the MDP, we can measure a number of different power rails on the device, including power draw from each CPU core, the digital core (which includes the video decoder and modem), and a number of other rails.

Measuring and keeping track of how different SoCs consume power is something we've wanted to do for a while, and at least under the Qualcomm MDP umbrella it's now possible to measure right on the device.
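
Trepn's traces can also be pulled off the device and summarized offline. Below is a minimal sketch of that kind of post-processing; the file name and column headers ("Time (ms)", "CPU0 Power (mW)") are illustrative assumptions, not Trepn's documented export schema.

```typescript
// Sketch: summarize one power rail from an exported Trepn trace.
// Column names and file name are assumptions for illustration only.
import { readFileSync } from "fs";

function summarizeRail(csvPath: string, railColumn: string): void {
  const lines = readFileSync(csvPath, "utf8").trim().split("\n");
  const header = lines[0].split(",");
  const idx = header.indexOf(railColumn);
  if (idx < 0) throw new Error(`rail column "${railColumn}" not found`);

  // Parse the rail's samples, skipping any malformed rows
  const samples = lines
    .slice(1)
    .map((line) => parseFloat(line.split(",")[idx]))
    .filter((v) => !Number.isNaN(v));

  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  const peak = Math.max(...samples);
  console.log(`${railColumn}: mean ${mean.toFixed(0)} mW, peak ${peak.toFixed(0)} mW`);
}

summarizeRail("trepn_export.csv", "CPU0 Power (mW)");
```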

The original goal was to compare power draw on the 45nm MSM8660 versus the 28nm MSM8960; however, we encountered stability issues with Trepn Profiler on the older platform that are still being resolved. Thankfully it is possible to take measurements on the MSM8960, and for this we turned to a very CPU intensive task that would last long enough to get a good measurement and also load both cores so we can see how things behave. That test is the Moonbat benchmark, a web-worker wrapper of the SunSpider 0.9 test suite. We fired up a test consisting of 4 workers and 50 runs inside Chrome Beta (which is web-worker enabled) and profiled using Trepn.
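
For readers unfamiliar with how a web-worker wrapper like this keeps both cores busy, here's a minimal sketch of the same idea: fan the workload out to several workers and collect per-run times. This is not Moonbat's actual source; the worker script name and message format are assumptions.

```typescript
// Sketch: a Moonbat-style harness that fans a SunSpider-like workload out
// to N web workers so the browser can schedule them across CPU cores.
// "worker.js" (assumed) loops over the SunSpider sub-tests and posts back
// one total time per run.
const WORKERS = 4;
const RUNS_PER_WORKER = 50;

function spawnWorker(): Promise<number[]> {
  return new Promise((resolve) => {
    const w = new Worker("worker.js");
    const times: number[] = [];
    w.onmessage = (e: MessageEvent<number>) => {
      times.push(e.data);
      if (times.length === RUNS_PER_WORKER) {
        w.terminate();
        resolve(times);
      }
    };
    w.postMessage({ runs: RUNS_PER_WORKER });
  });
}

Promise.all(Array.from({ length: WORKERS }, () => spawnWorker())).then((perWorker) => {
  const all = perWorker.flat();
  const total = all.reduce((a, b) => a + b, 0);
  console.log(`mean run time: ${(total / all.length).toFixed(1)} ms`);
});
```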

If you squint at the graph, you can see that one Krait core can use around 750 mW at maximum load. I didn't enable the CPU frequency graph (just to keep things simple above), but that 750 mW number happens right at 1.5 GHz. The green spikes on the battery power rail are when we're drawing more than the available current from USB - this is also why you sometimes see devices discharge even when plugged in. There's an idle period at the end that I also left visible - you can see how quickly Qualcomm's governor suspends the second core completely after our Moonbat test finishes running.
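
The battery spikes are simple arithmetic: a standard USB 2.0 port can supply at most 500 mA at 5 V (about 2.5 W), and anything the platform draws beyond that has to come out of the battery. A quick sketch of that bookkeeping, with an illustrative draw figure:

```typescript
// Sketch: why the battery rail spikes while plugged in. The 2.5 W budget is
// the USB 2.0 limit (5 V * 500 mA); the 3.2 W example draw is illustrative.
const USB_BUDGET_MW = 5 * 500;

function batteryContributionMw(platformDrawMw: number): number {
  // Whatever USB can't cover is pulled from the battery
  return Math.max(0, platformDrawMw - USB_BUDGET_MW);
}

console.log(batteryContributionMw(3200)); // a 3.2 W burst pulls ~700 mW from the battery
```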

Here's another run of Moonbat on Chrome Beta where we can see the same behavior, zoomed in a bit better - each Krait core will consume anywhere between 450 mW and 750 mW depending on the workload, which does change during our run as V8 does its JIT compilation and Chrome dispatches work to each CPU.

The next big question is obviously - how much does the GPU contribute to power drain? The red "Digital Core Rail Power" lines above include the Adreno 225 GPU, video decode, and "modem digital" blocks. Cellular is disabled on the MDP MSM8960, and we're not decoding any video, so in the right circumstances we can somewhat isolate the GPU. To find out, I profiled a run of GLBenchmark Egypt at High settings (which is an almost entirely GPU-bound test) and let it run to completion. You can see how the digital rail bounces between 800 mW and 1.2 W while the test is running. Egypt's CPU portions are pretty much single threaded as well, as shown by the yellow and green lines above.
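
With the modem off and no video decode, the rough way to attribute the digital rail to the GPU is to subtract an idle baseline from the samples captured while the benchmark runs. A minimal sketch of that estimate follows; the baseline window and the numbers in the example are assumptions, not figures from the Trepn trace.

```typescript
// Sketch: rough GPU isolation on the digital core rail. With cellular
// disabled and no video decode, (mean under load) - (mean at idle) is
// mostly Adreno 225 power. Sample values below are illustrative only.
function estimateGpuPowerMw(idleSamplesMw: number[], loadSamplesMw: number[]): number {
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  return mean(loadSamplesMw) - mean(idleSamplesMw);
}

// e.g. ~1000 mW under load minus ~150 mW idle leaves ~850 mW for the GPU
console.log(estimateGpuPowerMw([140, 150, 160], [950, 1050, 1200]).toFixed(0), "mW");
```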

Another interesting case is what this looks like when browsing the web. I fired up the analyzer, loaded the AnandTech homepage followed by an article, and scrolled around the page in the trace above. Chrome and the stock "Browser" on Android now use the GPU for composition and rendering the page, and you can see the red line in the plot spike up when I'm actively panning and translating around the page. In addition, the second CPU core only really wakes up when loading the page and parsing HTML.

One thing we unfortunately can't measure is how much power having the baseband lit up on each different air interface (CDMA2000 1x, EV-DO, WCDMA, LTE, etc.) consumes, as the MDP MSM8960 we were sampled with doesn't have cellular connectivity enabled. This is something we understand in theory (at least for the respective WCDMA and LTE radio resource states), but it remains to be explored empirically. It's also unfortunate that we can't compare to the MDP MSM8660 quite yet, but that might become possible fairly soon.

Comments

  • ssj4Gogeta - Tuesday, February 21, 2012 - link

    I think he meant lower compared to the 720p GLBenchmark where the A5 wins.
  • zanon - Tuesday, February 21, 2012 - link

    I agree the wording is a bit awkward there since they are both driving identical numbers of pixels. If he meant to compare it to the earlier 720p results it'd probably be better to make that explicit.
  • jjj - Tuesday, February 21, 2012 - link

    Looks like it's faster than Tegra 3, and with single-threaded perf certainly much better; the only remaining big question is power consumption.
  • Malih - Tuesday, February 21, 2012 - link

    I've been my old android device that comes with Android 1.6, and Cyanogenmod-ded to Gingerbread (it's not so responsive when running more than one app), because I need the new version of the Gmail app.
  • Malih - Tuesday, February 21, 2012 - link

    correction: I've been *using* my old...

    In short: it looks like I'll be waiting in line for a smartphone with this SoC
  • Zingam - Tuesday, February 21, 2012 - link

    I haven't been impressed by a CPU/GPU for years but this thing looks amazing! If they manage to go on like that we'll soon have a true ARM desktop experience.

    Great job! I wish now they support the latest DirectX/OpenGL/OpenCL/OpenVG etc. stuff and we'll have it!!! It is unimaginable what ARM based SoCs would deliver when the time for 14nm comes.
  • Torrijos - Tuesday, February 21, 2012 - link

    Since both devices actually render the same number of pixels but with different aspect ratios, would it be possible that the performance hit seen for the iPhone 4S is the result of graphics rendered in a standard aspect ratio (16:9 or something else) then having to be transformed to fit the particular screen?
  • cosminmcm - Tuesday, February 21, 2012 - link

    Maybe it's because at the lower resolution the faster CPU on the Krait (newer architecture with higher clocks) matters more than the faster GPU on the A5. When the resolution grows, the difference between the GPUs becomes more apparent.
  • LetsGo - Tuesday, February 21, 2012 - link

    What difference?

    http://blogs.unity3d.com/wp-content/uploads/2011/0...
  • metafor - Tuesday, February 21, 2012 - link

    Considering Apple controls the entire software stack and the A5 silicon, it'd be pretty stupid of them to do that. And if you look at how performance scales between the iPad (4:3) and iPhone (16:9), there's no slowdown due to aspect ratio.
