What's Next? ARM's Cortex A15

Comparing to Qualcomm's APQ8060A gives us a much better idea of how Atom fares in the modern world. Like Intel, Qualcomm appears to prioritize single-threaded performance and builds its SoCs on a leading-edge LP process. If this were the middle of 2012, the Qualcomm comparison is where we'd stop. However, this is a new year and there's a new kid in town: ARM's Cortex A15.

We've already looked at Cortex A15 performance and found it to be astounding. While Intel's 5-year-old Atom core can still outperform most of the other ARM-based designs on the market, the Cortex A15 easily outperforms it. But at what power cost?

To find out, we looked at a Google Nexus 10 featuring a Samsung Exynos 5250 SoC. The 5250 (aka Exynos 5 Dual) features two ARM Cortex A15s running at up to 1.7GHz, coupled with an ARM Mali-T604 GPU. The testing methodology remains identical.

Idle Power

As the Exynos 5250 isn't running Windows RT, we don't need to go through the same song and dance of waiting for live tiles to stop animating. The Android home screen is static to begin with; any swings in power consumption at this point have more to do with WiFi than anything else:

At idle, the Nexus 10 platform uses more power than any of the other tablets. This shouldn't be too surprising as the display requires much more power, so I don't think we can draw any conclusions about the SoC just yet. But just to be sure, let's look at power delivery to the 5250's CPU and GPU blocks themselves:

Ah, the wonderful world of power gating. Despite having much more power-hungry CPU cores, when they're doing nothing the ARM Cortex A15 looks no different than Atom or even Krait.

Mali-T604 looks excellent here. With virtually nothing happening on the display, the GPU doesn't have a lot of work to do to begin with, but I believe we're also seeing some of the benefits of Samsung's 32nm LP (HK+MG) process.

Remove WiFi from the equation and things remain fairly similar: total platform power is high thanks to a more power-hungry display, but at the SoC level idle power consumption is competitive. GPU power consumption continues to be amazing, although it's possible that Samsung simply doesn't dangle as much off of the GPU power rail as its competitors do.
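
For readers curious about the basic arithmetic behind rail-level numbers like these, the sketch below shows how average power on an instrumented rail can be computed from sampled voltage and current. It's a minimal Python illustration with made-up rail names and readings, not our actual test harness.

```python
# Minimal sketch: averaging power on an instrumented SoC rail from sampled
# voltage/current pairs. Rail names and readings below are hypothetical;
# this illustrates the arithmetic only, not the article's actual tooling.

def average_power_mw(samples):
    """samples: list of (volts, milliamps) tuples taken at a fixed interval."""
    if not samples:
        return 0.0
    # V * mA = mW, so the mean of instantaneous products is the average power in mW.
    return sum(v * ma for v, ma in samples) / len(samples)

# Hypothetical idle-period samples for the CPU and GPU rails.
cpu_rail = [(1.05, 12.0), (1.05, 10.5), (1.05, 11.2)]   # power-gated cores -> low current
gpu_rail = [(0.95, 3.1), (0.95, 2.8), (0.95, 3.0)]      # static screen -> near-idle GPU

print("CPU rail idle: %.1f mW" % average_power_mw(cpu_rail))
print("GPU rail idle: %.1f mW" % average_power_mw(gpu_rail))
```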

Comments

  • A5 - Friday, January 4, 2013 - link

    Even if you just look at the Sunspider (which draws nothing on the screen) power draw, it's pretty clear that the A15 draws more power. There have been a ton of OEMs complaining about A15's power draw, too.
  • madmilk - Friday, January 4, 2013 - link

    Since when did screen resolution matter for CPU power consumption on CPU benchmarks? Platform power might change, yes, but that doesn't invalidate facts like the Cortex-A15 using roughly twice as much power on average compared to Krait, Atom, or Cortex-A9.
  • Wolfpup - Friday, January 4, 2013 - link

    Good lord. Do you have some evidence for any of this? If neither Windows nor Android is the "right platform" for ARM, then...are you waiting for Blackberry benchmarks? That's a whole lot of spin you're doing, presumably to fit the data to your preconceived "ARM IS BETTER!" faith.
  • Veteranv2 - Friday, January 4, 2013 - link

    Hahaha, the Nexus 10 has almost 4 times the pixels of the Atom tablet.
    And the conclusion is that it draws more power in benchmarks? Of course, those pixels aren't going to fill themselves. Way to draw a conclusion.

    How big was that Intel PR cheque?
  • iwod - Saturday, January 5, 2013 - link

    While I wouldn't say it was Intel PR, I think they should definitely have left the system-level power usage out of the comparison. There is no point telling me that a 100" screen with ARM is using X amount of power compared to a 1" screen with Haswell.

    It is confusing.

    But they did include CPU and GPU benchmarks. So saying it is Intel PR is just trolling.
  • AlB80 - Friday, January 4, 2013 - link

    Architectures with variable-length instructions are doomed. Actually, only one remains: x86.
    Intel made a step back into a happy past when CISC had an advantage over RISC, when superscalarity was just a theory.
    Cortex A57 is coming. ARM cores will easily outperform Atom in effective instruction rate with minimal overhead.
  • Wolfpup - Friday, January 4, 2013 - link

    How is x86 doomed when it has an absolute stranglehold on real PCs, and is now competitive on ultramobile platforms?

    The only disadvantage it holds is the need for a larger decoder on the front end, which has been proportionally shrinking since 1995.
  • djgandy - Friday, January 4, 2013 - link

    plus effing one!

    I think some people heard their uni lecturers say something once in 1999 and just keep repeating it as if it is still true!
  • AlB80 - Friday, January 4, 2013 - link

    Shrinking decoder... nice myth. Of course the complicated scheduler and the dozen ALUs impact performance, but don't forget how the decoded instruction queues are filled. The decoder is the only real difference.
    1. There are fundamental limits on how many variable-length instructions can be decoded per clock. CISC has instruction cross-interference at the decode stage: one logic block has to determine the total length of the decoded instructions.
    2. There is a trick where the CISC decoder is split into 2-3 parts with dedicated inputs, so it looks like a few independent decoders, but each part cannot decode every instruction.

    Now compare that with RISC.
    And as I said, what happens when a Cortex can decode 4, 5, 6, 7, 8 instructions?
  • Kogies - Friday, January 4, 2013 - link

    Don't be so quick to prophesy the death of a' that. What happens when a Cortex decodes 8 instructions... I don't know, it uses 8W?

    Also, didn't Apple choose CISC (Intel) chips over RISC (PowerPC)? Interestingly, I believe Apple made the switch to Intel because the PowerPC chips had too high a power premium for mobile computers.
