Final Words

Ultimately I don't know that this data really changes what we already knew about Clover Trail: it is a more power efficient platform than NVIDIA's Tegra 3. I've summarized the power consumption advantage in the table below (I left out the GPU numbers since I'm not totally clear on what NVIDIA attaches to the GPU power rail on Tegra 3):

Power Consumption Comparison

| Workload | Surface RT (Platform) | W510 (Platform) | Surface RT (CPU) | W510 (CPU) |
|---|---|---|---|---|
| Idle | 3.182 W | 2.474 W | 70.2 mW | 36.4 mW |
| Cold Boot | 5.358 W | 3.280 W | 800 mW | 216 mW |
| SunSpider 0.9.1 | 4.775 W | 3.704 W | 722 mW | 520 mW |
| Kraken | 4.738 W | 3.582 W | 829 mW | 564 mW |
| RIABench | 3.962 W | 3.294 W | 379 mW | 261 mW |
| WebXPRT | 4.617 W | 3.225 W | 663 mW | 412 mW |
| TouchXPRT (Photo Enhance) | 4.789 W | 3.793 W | 913 mW | 378 mW |
| GPU Workload | 5.395 W | 3.656 W | 1432 mW | 488 mW |
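To put the platform numbers in relative terms, here's a quick sketch that reproduces the gap from the table above; the wattages are copied straight from the table, and the percentage is simply (Tegra 3 − Clover Trail) / Tegra 3:

```python
# Platform power from the table above (watts): Surface RT (Tegra 3)
# vs. Acer W510 (Clover Trail).
WORKLOADS = {
    "Idle":                      (3.182, 2.474),
    "Cold Boot":                 (5.358, 3.280),
    "SunSpider 0.9.1":           (4.775, 3.704),
    "Kraken":                    (4.738, 3.582),
    "RIABench":                  (3.962, 3.294),
    "WebXPRT":                   (4.617, 3.225),
    "TouchXPRT (Photo Enhance)": (4.789, 3.793),
    "GPU Workload":              (5.395, 3.656),
}

def percent_saving(tegra_w: float, clover_w: float) -> float:
    """Percentage less platform power the W510 draws than Surface RT."""
    return (tegra_w - clover_w) / tegra_w * 100.0

for name, (tegra, clover) in WORKLOADS.items():
    print(f"{name}: W510 draws {percent_saving(tegra, clover):.0f}% less power")
```

Run against the table, the W510 comes out drawing roughly 17-39% less platform power than Surface RT depending on workload, with the biggest gaps at cold boot and under GPU load.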

Across the board Intel manages a huge advantage over NVIDIA's Tegra 3. Again, this shouldn't be a surprise. Intel's 32nm SoC process offers a big advantage over the TSMC 40nm G process used for NVIDIA's Cortex A9 cores (the rest of the SoC is built on LP; NVIDIA refers to the combination as TSMC 40nm LPG), and there are also the architectural advantages that Atom offers over ARM's Cortex A9. As we've mentioned in both our Medfield and Clover Trail reviews: the x86 power myth has been busted. I think it's very telling that Intel didn't show up with an iPad for this comparison, although I will be trying to replicate this setup on my own with an iPad 4 to see if I can't make it happen without breaking too many devices. We've also just received the first Qualcomm Krait based Windows RT tablets, which will make for another interesting comparison point going forward.

Keeping in mind that this isn't Intel's best foot forward either, the years ahead should provide some entertaining competition. In less than a year Intel will be shipping its first 22nm Atom in tablets, while NVIDIA will quickly toss Tegra 3 aside in favor of the Cortex A15 based, 28nm Wayne (Tegra 4?) SoC in the first half of next year. Beating up on Surface RT today may be fun for Intel, but next year won't be quite as easy. The big unknown in all of this is of course what happens when Core gets below 10W. Intel has already demonstrated Haswell at 8W, so it wouldn't be too far-fetched to assume that Intel is gunning for Swift/Cortex A15 with a Core based SoC next year.

Here's where it really gets tricky: Intel built the better SoC, but Microsoft built the better device, and that device happens to use Tegra 3. The days of Intel simply building a chip and putting it out in the world are long gone. As it first discovered with Apple, only through a close relationship with the OEM can Intel really deliver a compelling product. Left to their own devices, the OEMs don't always build competitive products. Despite Intel's significant involvement in Acer's W510, the tablet showed up with an unusable trackpad, underperforming WiFi and stability issues. Clover Trail has the CPU performance I want from a tablet today, but I want Apple, Google or Microsoft to use it. I do have hope that the other players will wake up and get better, but for next year I feel like the tune won't be any different. Intel needs design wins among the big three to really make an impact in the tablet space.

The good news is Microsoft is already engaged with Surface Pro. It's safe to bet that there will be a Haswell version coming as well. Now Intel just needs an iPad and a Nexus win.


163 Comments


  • karasaj - Monday, December 24, 2012 - link

    All they need to do is either use Intel HD graphics (Haswell) or license a better GPU from Imagination, I imagine. That said, ARM (Samsung?) has really been developing better GPUs lately; they seem to be catching up.
  • jeffkibuule - Tuesday, December 25, 2012 - link

    They've already stated they will be integrating a variant of the Intel HD 4000 GPU in their next-generation Atom SoC; the only questions are how many Execution Units it will have and what kind of power profile they will be targeting.

    With Intel, the question isn't so much about performance, but maximizing profits. If they build an Atom SoC that's so great and also cost competitive with other ARM chips, who would buy their more expensive Core CPUs? This is one reason why I believe that the Atom and Core lines will eventually have to merge, just like how the Pentium and Pentium M lines had to converge into the original Core series back in 2006 (oh, how the irony in history repeating itself).
  • lmcd - Tuesday, December 25, 2012 - link

    It's more likely a variant of the HD 2500, which won't be enough. The HD 4000 doesn't even beat the 543MP3, does it?
  • jeffkibuule - Tuesday, December 25, 2012 - link

    There haven't really been any comparisons of mobile and smartphone GPUs yet. We'll have to wait for 3DMark for Windows 8 to get our first reliable comparison.
  • wsw1982 - Tuesday, December 25, 2012 - link

    I think it depends: 16 543MP3 cores should beat the 4K :) A single 543 core is not better than a 545.
  • mrdude - Wednesday, December 26, 2012 - link

    It's also a matter of TDP, though. The ARM SoCs pack a lot of punch on the CPU side while often delivering better GPU performance on an equal footing with respect to TDP (sub-2W for smartphones and roughly sub-5W for tablets).

    As much as Intel wants to pound home the point that x86 is power efficient, this is an SoC and therefore a package deal. Intel still suffers from a lopsided design approach, dedicating far too much die space to the CPU and treating the GPU as an afterthought. If you look at the more successful and popular/powerful ARM SoCs, it tends to be the other way around. A balanced approach with great efficiency is what makes the Snapdragon S4s such fantastic SoCs and why Qualcomm has now surpassed Intel in total market cap. The GPU is only going to become more important going forward as PPI increases drastically. At least for Apple, they've already reached a point where they're required to spend a huge portion of the die on the GPU, with smaller, incremental bumps in CPU performance.

    This really seems like Intel shoehorning its old Atom architecture into a lower TDP, saying: "Look! It's efficient! Just don't pay any attention to the fact that we're comparing it to a 40nm Tegra 3, and don't you dare do any GPU benchmarks." These things are meant for tablets; is Intel not aware of just how much MORE the GPU matters? Great perf-per-watt (maybe), but that's all for nothing if the SoC sucks.
  • somata - Monday, December 31, 2012 - link

    As others have said, it'll be nice once we can do proper comparisons between tablet/notebook/desktop GPUs, but in the meantime just consider the peak shader performance of each:
    Intel HD 4000 - 16(x4x2) @ 1.3GHz - 333 GFLOPS
    Intel HD 3000 - 12(x4) @ 1.3GHz - 125 GFLOPS
    PowerVR SGX 543MP4 - 16(x4) @ 300MHz - 38.4 GFLOPS
    PowerVR SGX 554MP4 - 32(x4) @ 300MHz - 76.8 GFLOPS
    The PowerVR numbers are based on Anand's analysis. It's obviously not an exactly fair comparison, but clearly Intel's mainstream integrated GPUs are substantially more powerful than any current PowerVR design. Of course that shouldn't be a surprise given the TDP of each platform.
  • p3ngwin1 - Tuesday, December 25, 2012 - link

    There are already smartphones with 1080p displays and Android tablets with even higher resolutions :)
  • coolhund - Tuesday, December 25, 2012 - link

    Plus the Atom is not OoO, and in-order designs are known to use much less power. Plus the OS is not the same.
    Sorry, but for me this comparison is nonsense.
  • tipoo - Monday, December 24, 2012 - link

    I'll be very interested to read the Cortex A15 follow up. From what I gather, if compared on the same lithography the A15 core is much larger than the A9, which likely means more power, all else being equal. It brings performance up to and sometimes over the prior generation Atom, but I wonder what power requirement sacrifices were made, if any.

    I'm thinking in the coming years, Intel vs ARM will become a more interesting battle than Intel vs AMD.
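As a footnote to the GPU discussion in the thread above, the peak-GFLOPS figures quoted there follow from simple lane-count arithmetic: SIMD lanes × 2 FLOPs (counting a multiply-add as two operations, the usual marketing convention) × clock. A short sketch, with lane counts and clocks taken from that comment:

```python
# Peak shader throughput: SIMD lanes x 2 FLOPs (multiply-add) x clock.
def peak_gflops(lanes: int, clock_ghz: float) -> float:
    return lanes * 2 * clock_ghz

# Lane counts and clocks as quoted in the comment above.
GPUS = {
    "Intel HD 4000":      (16 * 4 * 2, 1.3),  # 16 EUs, 2x 4-wide per EU
    "Intel HD 3000":      (12 * 4,     1.3),  # 12 EUs, one 4-wide per EU
    "PowerVR SGX 543MP4": (16 * 4,     0.3),  # 16 pipes, 4-wide
    "PowerVR SGX 554MP4": (32 * 4,     0.3),  # 32 pipes, 4-wide
}

for name, (lanes, clock) in GPUS.items():
    print(f"{name}: {peak_gflops(lanes, clock):.1f} GFLOPS")
```

This reproduces the 332.8/124.8/38.4/76.8 GFLOPS figures from the comment; as noted there, it's a theoretical peak, not a fair cross-platform benchmark.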
