Final Words

Whereas I didn't really have anything new to conclude in the original article (Atom Z2760 is faster and more power efficient than Tegra 3), there's a lot to talk about here. We already know that Atom is faster than Krait, but from a power standpoint the two SoCs are extremely competitive. At the platform level Intel (at least in the Acer W510) generally leads in power efficiency. Note that this advantage could just as easily be due to display and other power advantages in the W510 itself and not necessarily indicative of an SoC advantage.

Looking at the CPU cores themselves, Qualcomm takes the lead. It's unclear how things would change if we could include L2 cache power consumption for Qualcomm as we do for Intel (see page 2 for an explanation). I suspect that Qualcomm does maintain the power advantage here though, even with the L2 cache included.

On the GPU side, Intel/Imagination win on power, although the roles reverse on performance: Adreno 225 holds the advantage there. For modern UI work the PowerVR SGX 545 is good enough, but Adreno 225 is clearly the faster 3D GPU. Intel has underspecced its ultra mobile GPUs for a while, so much of the power advantage comes from the lower performing GPU. In 2D/modern UI tests, however, that performance deficit isn't exposed, so the power advantage still stands.

Qualcomm is generally able to push to lower idle power levels, indicating that even Intel's 32nm SoC process is getting a little long in the tooth. TSMC's 28nm LP and Samsung's 32nm LP processes both help silicon built in those fabs drive down to insanely low idle power levels. That being said, it is still surprising to me that a 5-year-old Atom architecture paired with a low power version of a 3-year-old process technology can be this competitive. In the next 9 - 12 months we'll finally get an updated, out-of-order Atom core built on a brand new 22nm low power/SoC process from Intel. This is one area where we should see real improvement. Intel's chances in this space are good if it can execute well and get its parts into designs people care about.


Device-level power consumption, from our iPhone 5 review. Look familiar?

If the previous article was about busting the x86 power myth, one key takeaway here is that Intel's low power SoC designs are headed in the right direction. Atom's power curve looks a lot like Qualcomm's, and I suspect a lot like Apple's. There are performance/power tradeoffs that all three make, but they're all being designed the way they should.

The Cortex A15 data is honestly the most intriguing. I'm not sure how the first A15-based smartphone SoCs will compare to the Exynos 5 Dual in power consumption, but at least based on the data here, Cortex A15 looks to be in a league of its own. Depending on the task that may not be an issue, but you still need a chassis capable of dissipating 1 - 4x the power of a present day smartphone SoC from Qualcomm or Intel. Obviously for tablets the Cortex A15 can work just fine, but I am curious to see what will happen in a smartphone form factor. With lower voltage/clocks and a well-architected turbo mode it may be possible to deliver reasonable battery life, but simply tossing the Exynos 5 Dual from the Nexus 10 into a smartphone isn't going to work well. It's very obvious to me why ARM proposed big.LITTLE with Cortex A15 and why Apple designed Swift.
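The "depending on the task" caveat comes down to the difference between power and energy: a higher-power core that races through a workload and returns to idle can still consume less total energy than a slower, lower-power core. A minimal sketch of that arithmetic, using invented numbers purely for illustration (none of these are measured values from our testing):

```python
# Hypothetical numbers for illustration only -- not measured values.
# A higher-power core can still use less *energy* if it finishes the
# task sooner and spends the rest of the window at idle.

def task_energy(active_w, active_s, idle_w, window_s):
    """Energy in joules over a fixed window: active burst + idle remainder."""
    return active_w * active_s + idle_w * (window_s - active_s)

# Fast, power-hungry core: 4 W for 2 s, then a ~0.1 W idle floor
fast = task_energy(4.0, 2.0, 0.1, 10.0)   # 8.8 J
# Slower, lower-power core: 1.5 W for 6 s, same idle floor
slow = task_energy(1.5, 6.0, 0.1, 10.0)   # 9.4 J

print(f"fast core: {fast:.1f} J, slow core: {slow:.1f} J")
```

The catch for a phone is the 4W burst itself: even if the energy works out, the chassis still has to dissipate that peak without throttling, which is exactly the thermal problem described above.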

I'd always heard about Haswell as the solution to the ARM problem, particularly in reference to the Cortex A15. The data here, particularly on the previous page, helped me understand exactly what that meant. Under a CPU or GPU heavy workload, the Exynos 5 Dual will draw around 4W. Peak TDP, however, is closer to 8W. If you remember back to IDF, Intel specifically called out 8W as a potential design target for Haswell. In reality, I expect that we'll see Haswell parts at even lower power than that. While it may still be a stretch to bring Haswell down to 4W, it's very clear to me that Intel sees this as a possibility in the near term. Perhaps not at 22nm, but definitely at 14nm. We already know Core can get below 8W at 22nm; if it can reach around 4W, that opens up a whole new class of form factors to a traditionally high-end architecture.
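The 4W vs. 8W distinction is essentially sustained draw vs. transient peak. One common way to separate the two from a power trace is to take the maximum of a rolling average: brief spikes show up in the instantaneous peak but wash out of the sustained figure. A sketch with an invented sample trace (the values are made up, not from our instrumentation):

```python
# Sketch: separating transient peak draw from sustained (TDP-like) draw
# using the max rolling average over a power trace. Values are invented.

def sustained_power(samples_w, window=4):
    """Maximum rolling-average power over `window` consecutive samples."""
    return max(
        sum(samples_w[i:i + window]) / window
        for i in range(len(samples_w) - window + 1)
    )

# Hypothetical 1 Hz trace: a brief 8 W spike over an ~4 W sustained load
trace = [3.9, 4.1, 8.0, 4.2, 4.0, 3.8, 4.1, 4.0]

peak = max(trace)              # 8.0 W -- the transient a heatsink must survive
avg = sustained_power(trace)   # ~5.1 W -- closer to what the chassis must sustain

print(f"peak: {peak} W, sustained: {avg:.2f} W")
```

This is why a chip can carry an 8W "peak TDP" while its heavy workloads average around 4W: the cooling solution is sized for the sustained number, with headroom for short excursions.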

Ultimately I feel like that's how all of this is going to play out. Intel's Core architectures will likely service the 4W and above space, while Atom will take care of everything else below it. The really crazy part is that it's not too absurd to think about being able to get a Core based SoC into a large smartphone as early as 14nm, and definitely by 10nm (~2017) should the need arise. We've often talked about smartphones being used as mainstream computing devices in the future, but this is how we're going to get there. By the time Intel moves to 10nm ultramobile SoCs, you'll be able to get somewhere around Sandy/Ivy Bridge class performance in a phone.

At the end of the day, I'd say that Intel's chances for long term success in the tablet space are pretty good - at least architecturally. Intel still needs a Nexus, iPad or other similarly important design win, but it should have the right technology to get there by 2014. It's up to Paul or his replacement to ensure that everything works on the business side.

As far as smartphones go, the problem is a lot more complicated. Intel needs a good high-end baseband strategy which, as of late, the Infineon acquisition hasn't been able to produce. I've heard promising things in this regard, but the baseband side of Intel remains embarrassingly quiet. This is an area where Qualcomm is the undisputed leader; Intel has a lot of work ahead of it here. As for the rest of the smartphone SoC, Intel is on the right track. Its existing architecture remains performance and power competitive with the best Qualcomm has to offer today. Both Intel and Qualcomm have architecture updates planned in the not too distant future (with Qualcomm out of the gate first), so this will be one interesting battle to watch. If ARM is the new AMD, then Krait is the new Athlon 64. The difference is, this time, Intel isn't shipping a Pentium 4.


  • powerarmour - Friday, January 4, 2013 - link

    So yes, finally confirming what anyone with half a brain knows: competitive ARM SoCs use less power.
  • apinkel - Friday, January 4, 2013 - link

    I'm assuming you are kidding.

    Atom is roughly equivalent to (dual core) Krait in power draw but has better performance.

    The A15 is faster than either Krait or the Atom, but its power draw is too high to make it usable in a smartphone (which is, I'm assuming, why Qualcomm had to redesign the A15 architecture for Krait to make it fit into the smartphone power envelope).

    The battle I still want to see is quad-core Krait and Atom.
  • ImSpartacus - Friday, January 4, 2013 - link

    Let me make sure I have this straight. Did Qualcomm redesign A15 to create Krait?
  • djgandy - Friday, January 4, 2013 - link

    No. Qualcomm creates its own designs from scratch. They have an instruction set license from ARM, so their cores are ARM "clones."
  • apinkel - Friday, January 4, 2013 - link

    Sorry, yeah, I could have worded that better.

    But in any case the comment now has me wondering if I'm off base in my understanding of how Qualcomm does what it does...

    I've been under the impression that Qualcomm took the ARM design and tweaked it for their needs (instead of just licensing the instruction set and the full chip design top to bottom). Yeah/Nay?
  • fabarati - Friday, January 4, 2013 - link

    Nay.

    They do what AMD does: they license the instruction set and create their own CPUs that are compatible with the ARM ISAs (in Krait's case, ARMv7). That's also what Apple did with their Swift cores.

    Nvidia tweaked the Cortex A9 in the Tegra 2, but it was still a Cortex A9. Ditto for Samsung, Hummingbird and the Cortex A8.
  • designerfx - Friday, January 4, 2013 - link

    Do I need to remind you that the Tegra 3 has disabled cores on the RT? Using an actual Android device with Tegra 3 would show better results.
  • madmilk - Friday, January 4, 2013 - link

    The disabled 5th core doesn't matter in loaded situations. During idle, screen power dominates, so it still doesn't really matter. About all you'll get is more standby time, and Atom seems to be doing fine there.
  • designerfx - Friday, January 4, 2013 - link

    The companion core enables a lot of significant things - in other words, it matters a great deal, including in high load situations.

    That has nothing to do with the Atom. You get more than standby time.
  • designerfx - Friday, January 4, 2013 - link

    also, during idle the screen is off, usually after whatever amount of time the settings are set for. Which is easily indicated in the idle measurements. What the heck are you even talking about?
