Final Words

Whereas I didn't really have anything new to conclude in the original article (Atom Z2760 is faster and more power efficient than Tegra 3), there's a lot to talk about here. We already know that Atom is faster than Krait, but from a power standpoint the two SoCs are extremely competitive. At the platform level Intel (at least in the Acer W510) generally leads in power efficiency. Note that this advantage could just as easily be due to display and other power advantages in the W510 itself and not necessarily indicative of an SoC advantage.

Looking at the CPU cores themselves, Qualcomm takes the lead. It's unclear how things would change if we could include L2 cache power consumption for Qualcomm as we do for Intel (see page 2 for an explanation). I suspect that Qualcomm does maintain the power advantage here though, even with the L2 cache included.

On the GPU side, Intel/Imagination win on power, although the roles reverse on performance: Adreno 225 is clearly the faster 3D GPU, while the PowerVR SGX 545 is good enough for modern UI work. Intel has underspecced its ultra mobile GPUs for a while, so much of the power advantage comes from the lower performing GPU. In 2D/modern UI tests, however, that performance deficit never materializes, so the power advantage still stands.

Qualcomm is generally able to push to lower idle power levels, indicating that even Intel's 32nm SoC process is getting a little long in the tooth. TSMC's 28nm LP and Samsung's 32nm LP processes both help silicon built at those fabs drive down to insanely low idle power levels. That being said, it's still surprising to me that a 5-year-old Atom architecture paired with a low power version of a 3-year-old process technology can be this competitive. In the next 9 - 12 months we'll finally get an updated, out-of-order Atom core built on a brand new 22nm low power/SoC process from Intel; this is one area where we should see real improvement. Intel's chances in this space are good if it can execute well and get its parts into designs people care about.


Device level power consumption, from our iPhone 5 review. Look familiar?

If the previous article was about busting the x86 power myth, one key takeaway here is that Intel's low power SoC designs are headed in the right direction. Atom's power curve looks a lot like Qualcomm's, and I suspect a lot like Apple's. There are performance/power tradeoffs that all three make, but they're all being designed the way they should.

The Cortex A15 data is honestly the most intriguing. I'm not sure how the first A15 based smartphone SoCs will compare to Exynos 5 Dual in terms of power consumption, but at least based on the data here it looks like Cortex A15 is really in a league of its own when it comes to power consumption. Depending on the task that may not be an issue, but you still need a chassis that's capable of dissipating 1 - 4x the power of a present day smartphone SoC made by Qualcomm or Intel. Obviously for tablets the Cortex A15 can work just fine, but I am curious to see what will happen in a smartphone form factor. With lower voltage/clocks and a well architected turbo mode it may be possible to deliver reasonable battery life, but simply tossing the Exynos 5 Dual from the Nexus 10 into a smartphone isn't going to work well. It's very obvious to me why ARM proposed big.LITTLE with Cortex A15 and why Apple designed Swift.
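To make the voltage/clock lever concrete: dynamic CPU power scales roughly with switched capacitance times the square of supply voltage times frequency, which is why modest voltage and clock reductions buy outsized power savings. A minimal sketch, with purely illustrative operating points rather than measured Exynos 5 figures:

```python
# Dynamic switching power approximation: P = C_eff * V^2 * f
def dynamic_power(c_eff, volts, freq_hz):
    """Approximate dynamic CPU power in watts."""
    return c_eff * volts**2 * freq_hz

# Hypothetical A15-class operating points (illustrative, not measured):
p_high = dynamic_power(1.0e-9, 1.1, 1.7e9)  # ~1.7 GHz at 1.1 V
p_low = dynamic_power(1.0e-9, 0.9, 1.2e9)   # ~1.2 GHz at 0.9 V

# Dropping voltage and clock together cuts power superlinearly,
# since voltage enters the equation squared.
print(f"high: {p_high:.2f} W, low: {p_low:.2f} W")
print(f"reduction: {1 - p_low / p_high:.0%}")
```

Even this toy example shows why a smartphone-tuned A15 at lower voltage and clocks, with a well designed turbo mode, could land well under the Nexus 10's power levels.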

I'd always heard Haswell described as the solution to the ARM problem, particularly in reference to the Cortex A15. The data here, particularly on the previous page, helped me understand exactly what that meant. Under a CPU or GPU heavy workload, the Exynos 5 Dual will draw around 4W; peak TDP, however, is closer to 8W. If you remember back to IDF, Intel specifically called out 8W as a potential design target for Haswell. In reality, I expect we'll see Haswell parts at even lower power than that. While it may still be a stretch to bring Haswell down to 4W, it's very clear to me that Intel sees this as a possibility in the near term. Perhaps not at 22nm, but definitely at 14nm. We already know Core can hit below 8W at 22nm; if it can get down to around 4W, that opens up a whole new class of form factors to a traditionally high-end architecture.

Ultimately I feel like that's how all of this is going to play out. Intel's Core architectures will likely service the 4W and above space, while Atom will take care of everything else below it. The really crazy part is that it's not too absurd to think about being able to get a Core based SoC into a large smartphone as early as 14nm, and definitely by 10nm (~2017) should the need arise. We've often talked about smartphones being used as mainstream computing devices in the future, but this is how we're going to get there. By the time Intel moves to 10nm ultramobile SoCs, you'll be able to get somewhere around Sandy/Ivy Bridge class performance in a phone.

At the end of the day, I'd say that Intel's chances for long term success in the tablet space are pretty good - at least architecturally. Intel still needs a Nexus, iPad or other similarly important design win, but it should have the right technology to get there by 2014. It's up to Paul or his replacement to ensure that everything works on the business side.

As far as smartphones go, the problem is a lot more complicated. Intel needs a good high-end baseband strategy, which, as of late, the Infineon acquisition hasn't been able to produce. I've heard promising things in this regard, but the baseband side of Intel remains embarrassingly quiet. This is an area where Qualcomm is the undisputed leader, and Intel has a lot of work ahead of it here. As for the rest of the smartphone SoC, Intel is on the right track. Its existing architecture remains performance and power competitive with the best Qualcomm has to offer today. Both Intel and Qualcomm have architecture updates planned in the not too distant future (with Qualcomm out of the gate first), so this will be an interesting battle to watch. If ARM is the new AMD, then Krait is the new Athlon 64. The difference is that this time, Intel isn't shipping a Pentium 4.


140 Comments


  • StrangerGuy - Sunday, March 24, 2013 - link

    You really think having a slightly better chip would make Samsung risk everything to get locked into a chip with an ISA owned, designed and manufactured by one single supplier? And when that supplier in question has historically shown all sorts of monopolistic abuses?

    And when a quad A7 can already scroll desktop sites in Android capped at 60 fps, additional performance provides very little real world advantage for most users. I'll even say most users would be annoyed by I/O bottlenecks like LTE speeds long before calling 2012+ class ARM CPUs too slow.
  • duploxxx - Monday, January 7, 2013 - link

    Now it is clear that when Intel provides this much material and resources, they know they are at least OK in the comparison against ARM on CPU and power... else they wouldn't make such a fuss.

    But what about GPU? Any decent benchmark or testing available on GPU performance?

    I played in December with the HP Envy x2, and after some questioning they installed a few light games which were "ok", but I wonder how good the GPU in the Atom really is. Power consumption looks OK, but I prefer a performing GPU at a bit higher power over a non-performing one.
  • memosk - Tuesday, January 8, 2013 - link

    It looks like the old problem of PowerPC vs. the PC. PowerPC had the faster RISC processor and the PC had the slower x86 processor.

    The end result was that the classical PC won that battle, because of tradition and, more importantly, the knowledge of the platform among users, producers, programmers...

    And you should think about economic factors like amortization and the whole environment known as logistics.

    The same problem was Tesla vs. Edison. Tesla had the better ideas and Edison was the businessman. Who won? :)
  • memosk - Tuesday, January 8, 2013 - link

    Nokia seriously tried to sell Windows 8 phones without SD cards, and they said it was because of Microsoft.

    How can you then compete against Android phones with SD cards? But if you copy Apple, you think it has logic.

    You generally need a complete, logical, OWN, CONSISTENT and ORIGINAL strategy.

    If you copy something, the danger is that your strategy will be inconsistent, leaky, "two way", vague, with tragic errors like incompatibility, or like the legendary Siemens phones: 1 crash per day. :D
  • apozean - Friday, January 11, 2013 - link

    Fixing typos..

    I studied the setup and it appears that Intel just wants to take on Nvidia's Tegra 3. Here are a couple of differences that I think are not highlighted appropriately:

    1. They used an Android tablet for Atom, Android tablet for Krait, but a Win RT (Surface) for Tegra 3. It must have been very difficult to fund a Google Nexus 7. Keeping the same OS across the devices would have controlled for a lot of other system variables. Wouldn't it?

    2. Tegra 3 is the only quad core chip among chips being compared. Atom and Krait are dual-core. If all four cores are running, wouldn't it make a difference to the idle power?

    3. Tegra 3 is built on 40nm and is one of the first A9 SoCs. In contrast, Atom is 32nm and Krait is 28nm.

    How does Tegra 3 fit in this setup?
  • some_guy - Wednesday, January 16, 2013 - link

    I think this may be the beginning of Intel being commoditized and the end of the juicy margins on most of their sales.

    I was just reading an article about how hedge funds love Intel. I don't see it, but that doesn't mean that the hedge funds would make money. Perhaps they know the earnings report that is coming out soon, maybe tomorrow, will be good. http://www.insidermonkey.com/blog/top-semiconducto...
  • some_guy - Wednesday, January 16, 2013 - link

    I meant to say "but that doesn't mean that the hedge funds won't make money."
  • raptorious - Wednesday, February 20, 2013 - link

    But Anand has no clue what the rails might actually be powering. How do we know that the "GPU Rail" is in fact just powering the GPU and not the entire uncore of the SoC? This article is completely biased towards Intel and lacks true engineering rigor.
  • EtTch - Tuesday, April 2, 2013 - link

    My take on all of this is that ARM and x86 aren't comparable at this point when it comes to comparing the instruction set architectures, due to the different lithography sizes and the new 3D transistors. Only when ARM based SoCs finally have all the physical features of the x86 parts will the two be truly comparable. Right now x86 is likely to have lower power consumption than ARM based processors built on a larger lithographic size. (I really don't know what it's called, but I'll go out on a limb and call it lithography size even though I know I'm most likely wrong.)
