CPU Performance

As always, we'll start our performance investigation with a handful of CPU-bound, web-browser-based tests. In all cases we used Chrome on the MDP/T. Remember that there's only an 8% increase in peak CPU frequency here (2.7GHz vs. the 2.5GHz Snapdragon 801), so I wouldn't expect a huge difference vs. Snapdragon 801.

SunSpider 1.0.2 Benchmark  (Chrome/Safari/IE)

Here the MDP/T scales pretty well, showing a 6% improvement in performance over the Snapdragon 801 based Galaxy S 5. In the case of the GS5 we're looking at a 2.5GHz Snapdragon 801 implementation, so the improvement makes sense. Both the Cortex A15 (TF701T/Shield) and Apple's Cyclone (in the iPad Air) are higher-performing designs here. Since there's no fundamental change to Krait's IPC, the only gains we see come from the higher clock speed.

Kraken 1.1 (Chrome/Safari/IE)

Kraken appears to be at its limit when it comes to Krait 400/450; there's effectively no additional scaling with frequency beyond 2.3GHz. We're either running into an architectural limitation or the limits of the software/browser combination itself.

Google Octane v2  (Chrome/Safari/IE)

Similarly, we don't see any real progress in the Google Octane test either. Snapdragon 805's CPU cores may run at a higher peak frequency, but that's definitely not the story here.

Basemark OS II

Basemark OS II gives us a look at native application performance across a variety of metrics. There are tests here that hit the CPU and GPU as well as the storage subsystem. The gains are exclusively on the graphics side, which makes sense given what we've just seen: Snapdragon 805's biggest gains will be GPU facing.

[Charts: Basemark OS II - Overall / System / Memory / Graphics / Web]

Geekbench 3.0

Although I don't typically use Geekbench, I wanted to include some numbers here to highlight that the increase in memory bandwidth for S805 over S801 doesn't really benefit the CPU cores:

Geekbench 3.0                Snapdragon 801 2.3GHz (HTC M8)   Snapdragon 805 2.7GHz (MDP/T)   % Increase for S805
Overall (single thread)      1001                             1049                            4.8%
Overall (multi-threaded)     2622                             2878                            9.7%
Integer (single thread)      956                              996                             4.2%
Integer (multi-threaded)     2999                             3037                            1.3%
FP (single thread)           843                              925                             9.7%
FP (multi-threaded)          2636                             3155                            19.7%
Memory (single thread)       1411                             1406                            0%
Memory (multi-threaded)      1841                             1949                            6%

I wouldn't read too much into the multi-threaded FP results; I suspect we're mostly seeing differences in thermal dissipation between the two test units. A closer look at the memory bandwidth numbers confirms that while Snapdragon 805 has more memory bandwidth, most of the increase is reserved for GPU use:

Geekbench 3.0 - Memory Bandwidth   Snapdragon 801 2.3GHz (HTC M8)   Snapdragon 805 2.7GHz (MDP/T)   % Increase for S805
Stream Copy (single thread)        7.89 GB/s                        8.04 GB/s                       1.9%
Stream Copy (multi-threaded)       9.53 GB/s                        10.1 GB/s                       5.9%
Stream Scale (single thread)       5.36 GB/s                        5.06 GB/s                       -5.6%
Stream Scale (multi-threaded)      7.31 GB/s                        7.63 GB/s                       4.3%
Stream Add (single thread)         5.27 GB/s                        5.2 GB/s                        -1.3%
Stream Add (multi-threaded)        6.84 GB/s                        7.51 GB/s                       9.8%
Stream Triad (single thread)       5.64 GB/s                        5.85 GB/s                       3.7%
Stream Triad (multi-threaded)      7.65 GB/s                        7.89 GB/s                       3.1%
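Geekbench's Stream subtests take their names from the classic STREAM benchmark kernels. As an illustration of what each one stresses, here are minimal versions of the four kernels (a sketch only; Geekbench's actual implementation, buffer sizes, and threading are not public):

```c
#include <stddef.h>

/* Minimal STREAM-style kernels. Reported bandwidth is bytes moved divided
 * by elapsed time: Copy/Scale touch 2 arrays (16 bytes per element with
 * doubles), Add/Triad touch 3 arrays (24 bytes per element). */
void stream_copy(double *a, const double *b, size_t n) {
    for (size_t i = 0; i < n; i++) a[i] = b[i];
}

void stream_scale(double *a, const double *b, double s, size_t n) {
    for (size_t i = 0; i < n; i++) a[i] = s * b[i];
}

void stream_add(double *a, const double *b, const double *c, size_t n) {
    for (size_t i = 0; i < n; i++) a[i] = b[i] + c[i];
}

void stream_triad(double *a, const double *b, const double *c, double s, size_t n) {
    for (size_t i = 0; i < n; i++) a[i] = b[i] + s * c[i];
}
```

Run over arrays much larger than the caches, these loops measure sustained memory bandwidth rather than compute, which is why the single-threaded numbers barely respond to the higher CPU clock.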


Comments



  • ArthurG - Wednesday, May 21, 2014 - link

    OK, so this is the year Qualcomm lost their leadership, as 2014's fastest Android SoC appears to be the Tegra K1. Much faster GPU (26fps in Manhattan offscreen, 60fps in T-Rex offscreen) and faster CPU too (see the Xiaomi MiPad benchmarks).
    With 20nm Erista with Maxwell coming at the same time as the S810, Nvidia will make the gap even wider next generation...
  • testbug00 - Wednesday, May 21, 2014 - link

    Does it matter if Nvidia cannot do it without a fan and higher power usage?

    The SHIELD is a perfect example of why Nvidia fails to win against Qualcomm in meaningful terms: in the phone/tablet market, raw performance does not matter. Perf/power and absolute power do matter.

    Until Nvidia learns to make things in lower power envelopes (the T4i is a decent example) they will lose to Qualcomm in meaningful ways.

    On that note, the "amazing" K1 chip will be clocked in the 600MHz range in the first tablets it comes in... How downclocked will Qualcomm's part be?

    Anyhow, if you need absolute performance in the SoC space (aka you are using it as a desktop, or, perhaps a laptop) NVidia is the player to go to. Otherwise, Qualcomm is plain better for phones/tablets.
  • grahaman27 - Wednesday, May 21, 2014 - link

    That's not true; take the Tegra Note for example. The K1 uses an updated A15 that is more power efficient. Then when the custom Denver chip hits, it will start taking names and cashing checks.
  • testbug00 - Thursday, May 22, 2014 - link

    When Nvidia gets a product with the lines in them, and power tests are done, I will believe it.

    Until then, Tegra 2, 3, and 4 all cast doubt on that.

    __could it happen__ YES!
    __Will it happen__ I DOUBT IT :/ :(
  • fivefeet8 - Thursday, May 22, 2014 - link

    You do realize that people have already done power tests with the TK1 boards right? Or are you pretty much ignoring anything until you get your "product with lines in them" argument. If that's the case, then I'm pretty sure the Snapdragon 805 fits in the same boat.
  • testbug00 - Thursday, May 22, 2014 - link

    Qualcomm has a history of products being used by OEMs... Nvidia has Tegra 2: tons of large OEM design wins... Tegra 3: many large OEM design wins, but far fewer... Tegra 4: no large OEM design wins, until they finally got a win in China.

    Nvidia did not lose design wins magically. Tegra consumed more power than they claimed, and may have been slower than they claimed.
  • fivefeet8 - Thursday, May 22, 2014 - link

    Why are you arguing about design wins when I'm talking about your power test argument? Have you seen the power information from users with a TK1 board or not? Or are you simply ignoring their findings?
  • testbug00 - Thursday, May 22, 2014 - link

    Nvidia lost design wins because they lied about their power usage and performance to OEMs repeatedly.

    Nvidia had plenty of support with Tegra 2; the OEMs thought the clockspeeds and power usage looked great... the final product, well, was noticeably slower at around the originally promised power. So OEMs grew wary of the next Tegra, but Nvidia might have made an honest mistake... They get numbers for Tegra 3... looks great... get the real Tegra 3... substantially slower, and more power (at that slower speed) than they were told.

    Nvidia tries to tell OEMs about their great new Tegra product... and OEMs do not use it; well, major mobile OEMs do not. MS, HP, a few other companies, and later a large Chinese company.

    I have no faith in Nvidia as far as Tegra is concerned. Companies that got Tegra 2, 3 (and perhaps Tegra 4 chips) got real chips. They ran at the clockspeeds NVidia promised, at the power Nvidia promised.

    Cherrypicking chips is not hard. It was done for large OEMs, in large enough numbers to let them secure many design wins.

    I'm not sure why you think it cannot be done for developer boards.

    Tegra K1 does look like a good Tegra product finally coming around though. I do not think it will be as good as Nvidia says, based off of the past, but, I don't think it will be as run-of-the-mill as the rest of the Tegra chips were.
  • ams23 - Thursday, May 22, 2014 - link

    Tegra 4/4i was used by various large OEMs including Asus, HP, Toshiba, Xiaomi, LG, Huawei, and ZTE. Of course, that is just smoke, because it has nothing to do with Tegra K1 performance or power efficiency.
  • testbug00 - Thursday, May 22, 2014 - link

    Here is Nvidia's website showing Tegra devices: http://www.nvidia.com/object/tegra-phones-tablets....

    How many Tegra 4 (Tegra 4i is a different, and fine, product) phones are there? One. By an OEM that also used Qualcomm for the same phone... I am sure the battery life tests are on the Qualcomm device (happy to be proven wrong).

    Otherwise, you have a bunch of PC manufacturers (not large mobile OEMs...) with tablets/laptops/AIOs. Oh, and 2 of the 11 products are made by Nvidia themselves.

    Tegra 4i, on the other hand, is what Tegra 3 should have been, and then some. It is a fine product.
