WebXPRT

I also included Principled Technologies' new HTML5/JS web test suite, WebXPRT, in our power analysis. As in the rest of the tests, Intel outperforms NVIDIA here while drawing less power. Surprisingly enough, a big part of that advantage continues to come from lower power consumption on the GPU rail.

TouchXPRT

For our first native client test, we turned to PT's TouchXPRT 2013. Since there is no "run-all" functionality in the TouchXPRT benchmark, we had to present individual power curves for each workload. The story told here is really more of the same: on the CPU side, Intel is able to deliver better performance at lower power consumption, while on the GPU side, performance is good enough for these tasks but, once again, is delivered at lower power consumption.


163 Comments


  • felixyang - Friday, December 28, 2012 - link

    You get the key point.
  • st.bone - Tuesday, December 25, 2012 - link

    Now that comment is just outright ignorant, that's if you even took the time to read the article.
  • jeffkibuule - Monday, December 24, 2012 - link

    This kind of makes me shake my head as to why a year-old SoC was used when Samsung was shipping the Exynos 5250 and Qualcomm had the APQ8064; heck, NVIDIA has Tegra 4 just waiting in the wings for a likely CES announcement (I know why, bad timing).

    My only hope is that in 2013, ARM and Atom SoCs can support 1080p displays, I don't think I can use anything less without wanting to poke my eyes out.
  • kyuu - Monday, December 24, 2012 - link

    Agreed. This article mostly highlights what an out-of-date SoC Tegra 3 is, and how bad it is without its companion core. Which is why it's so perplexing that Microsoft went with the Tegra 3 for the Surface. I can only guess that NVIDIA must be practically giving away Tegra 3 nowadays; otherwise I have no idea why anyone would use it.

    I'm not sure it's a huge win for Intel that Clover Trail beats a mostly obsolete ARM SoC in power use with such an incredibly mediocre GPU.

    A more interesting comparison would be between a current-gen Qualcomm chipset and/or Samsung's current-gen Exynos.
  • kyuu - Monday, December 24, 2012 - link

    To clarify the second "paragraph", Clover Trail is the one with the mediocre GPU.
  • lmcd - Tuesday, December 25, 2012 - link

    They're both mediocre GPUs at this point. Didn't the Adreno 225 keep pace with the T3 GPU?
  • jeffkibuule - Tuesday, December 25, 2012 - link

    The reasoning for going with Tegra 3 was pretty obvious. They needed a more finalized chip to develop Windows RT against, and Tegra 3 was the only thing available. Relying on a faster class of SoC like the Tegra 4, Snapdragon S4 Pro, or Exynos 5250 would have meant delaying the Windows RT version of tablets by several months at least, since I doubt development would have been done in time.
  • lmcd - Tuesday, December 25, 2012 - link

    The standard S4 would've done better, more likely. Clover Trail demoed how viable two cores are for W8, let alone Windows RT.
  • wsw1982 - Tuesday, December 25, 2012 - link

    There is already a comparison between Medfield, Krait, Swift, and the Exynos 5250 on the Android platform, isn't there? I think Microsoft is the one to blame that you can only compare Clover Trail and Tegra 3 on the Windows platform, LOL.
  • Krysto - Monday, December 24, 2012 - link

    Intel will not have anything that even approaches next year's ARM SoCs' GPUs anytime soon. And they can't match the Cortex A15 in performance anytime soon either.
