SunSpider 0.9.1

The results get more interesting when we look at power consumption during active workloads. We'll start off with SunSpider, a mid-length JavaScript benchmark that we frequently use in our reviews:

At the platform level, Qualcomm's APQ8060A powered Dell XPS 10 falls in between Surface RT and Acer's W510. Active power looks very similar to the Intel powered W510, but performance is appreciably slower, so total energy consumed is higher.
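
This is just the energy-is-power-times-time relationship at work. As a rough illustration (the numbers below are hypothetical, not our measured values), two platforms with similar active power but different runtimes end up with very different energy totals:

```python
# Hypothetical power/runtime figures to illustrate energy = power x time.
# These are NOT the measured values from our charts.
platforms = {
    "Tablet A": {"active_power_w": 2.5, "runtime_s": 900},
    "Tablet B": {"active_power_w": 2.5, "runtime_s": 1200},  # similar power, slower
}

for name, p in platforms.items():
    energy_j = p["active_power_w"] * p["runtime_s"]  # joules = watts x seconds
    print(f"{name}: {energy_j:.0f} J over {p['runtime_s']} s")

# Tablet B draws similar power but runs longer, so it consumes more total energy.
```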

Looking at the CPU, the situation changes a bit. Intel's peak power consumption is similar to Tegra 3's, while Krait manages to come in appreciably lower. I suspect the missing L2 cache power island here understates Qualcomm's power consumption by 100 - 200mW, but CPU-only power consumption would likely still be lower even with it included. At idle, Krait once again seems to have a bit of an advantage.
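
To see why that suspicion wouldn't change the ordering, a quick back-of-the-envelope check (the wattages are hypothetical, not the values from our charts):

```python
# Hypothetical CPU rail numbers in watts; NOT the values from our charts.
krait_cpu_w = 1.0   # Krait, measured without the L2 power island
atom_cpu_w = 1.4    # Atom, measured including L2

# Add the suspected 100 - 200 mW L2 band back to Krait's number.
for l2_w in (0.10, 0.20):
    adjusted = krait_cpu_w + l2_w
    verdict = "below" if adjusted < atom_cpu_w else "above"
    print(f"Krait + {l2_w * 1000:.0f} mW L2 = {adjusted:.2f} W ({verdict} Atom)")
```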

The situation changes once we look at GPU power consumption, with Intel/Imagination having the clear advantage here.

JavaScript Performance - SunSpider 0.9.1

Kraken

Mozilla's Kraken benchmark is a new addition to our JavaScript performance suite, and it's a beast. The test runs for much longer than SunSpider, but largely tells a similar story:


At the platform level, Acer's W510 has slightly higher peak power consumption compared to the Dell XPS 10 but it also completes the test quicker, giving it a better overall energy usage profile.

Looking at the CPU cores themselves, Qualcomm holds onto its lead here, although once again I suspect the margin of victory is exaggerated by the fact that we're not taking into account L2 cache power consumption for Qualcomm. Intel does deliver better performance, which allows Atom to race to sleep quicker than the APQ8060A.
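
The race to sleep argument can be made concrete: over a fixed window, a core that burns more power while active but finishes sooner spends the rest of the window at idle power, and can come out ahead on total energy. A minimal sketch with made-up numbers (not measurements from our charts):

```python
# Race-to-sleep over a fixed observation window; all numbers hypothetical.
WINDOW_S = 600   # total window we observe, in seconds
IDLE_W = 0.05    # idle power once the task is done

def window_energy(active_w: float, runtime_s: float) -> float:
    """Energy over the window: active phase plus idle for the remainder."""
    return active_w * runtime_s + IDLE_W * (WINDOW_S - runtime_s)

fast_hot = window_energy(active_w=2.0, runtime_s=300)   # faster, higher power
slow_cool = window_energy(active_w=1.5, runtime_s=500)  # slower, lower power
print(f"fast+hot: {fast_hot:.0f} J, slow+cool: {slow_cool:.0f} J")
# Here the faster, hotter core still wins on total energy: 615 J vs 755 J.
```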

The comparison to Tegra 3 is not surprising; this is exactly what we've seen play out in our battery life tests as well.

JavaScript Performance - Mozilla Kraken Benchmark

RIABench

RIABench's Focus Tests are on the other end of the spectrum, and take a matter of seconds to complete. What we get in turn is a more granular look at power consumption:


Here the W510 consumes more power at the platform level, but drops to a lower idle state than the XPS 10. Surface RT clearly uses more power than both.

Krait's CPU-level (excluding L2 cache) power consumption is once again lower than Atom's, but Atom completes the task quicker. In this case total energy usage is still in Qualcomm's favor. The discrepancy between the CPU-specific power results and the total platform results is partly due to the missing L2 cache power consumption data in the CPU power chart for Qualcomm, and partly due to differences in the tablets themselves.
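
One way to see how a missing rail produces that kind of discrepancy: platform power is the sum of several rails, so leaving one rail out of the CPU chart understates that CPU number even though the platform total is unaffected. A minimal sketch with hypothetical rail names and values (not our actual instrumentation):

```python
# Hypothetical per-rail power breakdown in watts; names and values are assumptions.
rails = {
    "cpu_cores": 1.0,
    "l2_cache": 0.15,         # in the platform total, missing from the CPU chart
    "gpu": 0.4,
    "display_and_rest": 2.0,
}

platform_total_w = sum(rails.values())
cpu_as_charted_w = rails["cpu_cores"]                    # excludes L2
cpu_complete_w = rails["cpu_cores"] + rails["l2_cache"]  # what a fair comparison needs

print(f"platform: {platform_total_w:.2f} W")
print(f"CPU as charted: {cpu_as_charted_w:.2f} W vs CPU incl. L2: {cpu_complete_w:.2f} W")
```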

JavaScript Performance - RIABench Focus Tests

Comments

  • A5 - Friday, January 4, 2013 - link

    Even if you just look at the SunSpider power draw (the test draws nothing on the screen), it's pretty clear that the A15 draws more power. There have been a ton of OEMs complaining about the A15's power draw, too.
  • madmilk - Friday, January 4, 2013 - link

    Since when did screen resolution matter for CPU power consumption on CPU benchmarks? Platform power might change, yes, but that doesn't invalidate findings like the Cortex-A15 using twice as much power on average compared to Krait, Atom, or the Cortex-A9.
  • Wolfpup - Friday, January 4, 2013 - link

    Good lord. Do you have some evidence for any of this? If neither Windows nor Android is the "right platform" for ARM, then...are you waiting for Blackberry benchmarks? That's a whole lot of spin you're doing, presumably to fit the data to your preconceived "ARM IS BETTER!" faith.
  • Veteranv2 - Friday, January 4, 2013 - link

    Hahaha, the Nexus 10 has almost 4 times the pixels of the Atom tablets.
    And the conclusion is that it draws more power in benchmarks? Of course; those pixels aren't going to fill themselves. Way to draw a conclusion.

    How big was that Intel PR cheque?
  • iwod - Saturday, January 5, 2013 - link

    While I wouldn't say it was Intel PR, I think they should definitely have left the system-level power usage out of the comparison. There is no point telling me that a 100" screen with ARM is using X amount of power compared to a 1" screen with Haswell.

    It is confusing.

    But they did include CPU and GPU benchmarks. So saying it is Intel PR is just trolling.
  • AlB80 - Friday, January 4, 2013 - link

    Architectures with variable-length instructions are doomed. Actually, only one remains: x86.
    Intel made a step into a happy past, when CISC had an advantage over RISC and superscalarity was just a theory.
    Cortex A57 is coming. ARM cores will easily outperform Atom in effective instruction rate with minimal overhead.
  • Wolfpup - Friday, January 4, 2013 - link

    How is x86 doomed when it has an absolute stranglehold on real PCs, and is now competitive on ultramobile platforms?

    The only disadvantage it holds is the need for a larger decoder on the front end, which has been proportionally shrinking since 1995.
  • djgandy - Friday, January 4, 2013 - link

    plus effing one!

    I think some people heard their uni lecturers say something once in 1999 and just keep repeating it as if it is still true!
  • AlB80 - Friday, January 4, 2013 - link

    Shrinking decoder... nice myth. Of course the complicated scheduler and the dozen ALUs impact performance, but don't forget how the decoded instruction queues get filled. The decoder is the only real difference.
    1. There are fundamental limits on how many variable-length instructions can be decoded per clock. CISC has instruction cross-interference at the decode stage: one logical block has to determine the total length of the decoded instructions.
    2. There is a trick where the CISC decoder is split into 2-3 parts with dedicated inputs, so it looks like a few independent decoders, but each part cannot decode every instruction.

    Now compare that with RISC.
    And as I said, what happens when a Cortex can decode 4, 5, 6, 7, 8 instructions? (A concrete sketch of the decode dependency appears after this thread.)
  • Kogies - Friday, January 4, 2013 - link

    Don't be so quick to prophesy the death of a' that. What happens when a Cortex decodes 8 instructions... I don't know, it uses 8W?

    Also, didn't Apple choose CISC (Intel) chips over RISC (PowerPC)? Interestingly, I believe Apple made the switch to Intel because the PowerPC chips had too high a power premium for mobile computers.
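
To make the variable-length decode argument in the thread above concrete: with a variable-length encoding, instruction N's start offset is only known once the lengths of instructions 0 through N-1 have been determined, while fixed-width slots can all be located up front. The toy encoding below is invented purely for illustration (it is not real x86 or ARM):

```python
# Toy variable-length ISA: the first byte encodes the instruction's total length.
# This is a made-up encoding to show the serial dependency, not real x86.

def decode_variable(code: bytes, max_insts: int) -> list[tuple[int, int]]:
    """Return (offset, length) pairs; each step depends on the previous length."""
    insts, offset = [], 0
    while offset < len(code) and len(insts) < max_insts:
        length = code[offset]  # must decode length before finding the next start
        insts.append((offset, length))
        offset += length       # serial: the next offset depends on this one
    return insts

def decode_fixed(code: bytes, width: int = 4) -> list[tuple[int, int]]:
    """Fixed-width slots: every offset is known up front, decodable in parallel."""
    return [(off, width) for off in range(0, len(code) - width + 1, width)]

program = bytes([2, 0, 3, 0, 0, 1, 4, 0, 0, 0])
print(decode_variable(program, max_insts=8))  # offsets found one at a time
print(decode_fixed(program))                  # offsets known immediately
```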
