GPU Performance

3DMark

Our first GPU test, 3DMark, doesn't do much to show Adreno 420 in a good light. 3DMark isn't the most GPU-intensive test we have, and here we see only marginal increases over Snapdragon 800/Adreno 330. I'd be interested to see whether there are any improvements on the power consumption front, since performance doesn't really change.

3DMark 1.2 Unlimited - Overall

3DMark 1.2 Unlimited - Graphics

3DMark 1.2 Unlimited - Physics

 

Basemark X 1.1

Basemark X 1.1 starts to show a difference between Adreno 420 and 330. At medium quality settings we see a 25% increase in performance over the Snapdragon 801 based Adreno 330 devices. Move to the high quality settings and the performance advantage grows to over 50%. Here even NVIDIA's Shield, with its fan-cooled Tegra 4, can't outperform the Adreno 420 GPU.

BaseMark X 1.1 - Overall (Medium)

BaseMark X 1.1 - Overall (High Quality)

BaseMark X 1.1 - Dunes (Medium, Offscreen)

BaseMark X 1.1 - Hangar (Medium, Offscreen)

BaseMark X 1.1 - Dunes (High Quality, Offscreen)

BaseMark X 1.1 - Hangar (High Quality, Offscreen)

GFXBench 3.0

GFXBench 3.0 Manhattan (Onscreen)

Manhattan continues to be a very stressful test, but the onscreen results are pretty interesting: Adreno 420 can drive a 2560 x 1440 display at the same frame rate at which Adreno 330 could drive a 1080p display.
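The pixel-count arithmetic behind that claim is worth spelling out; a quick sketch (using the stated panel resolutions, nothing else assumed):

```python
# Matching frame rates at different resolutions implies different
# pixel throughput. The MDP/T panel is 2560x1440; the Adreno 330
# comparison devices render onscreen at 1920x1080.

qhd_pixels = 2560 * 1440   # MDP/T panel (QHD)
fhd_pixels = 1920 * 1080   # 1080p comparison devices

scale = qhd_pixels / fhd_pixels
print(f"1440p pushes {scale:.2f}x the pixels of 1080p")
# Equal fps at 1440p therefore implies roughly 78% more pixel
# throughput from Adreno 420 than Adreno 330 delivers at 1080p.
```

That ~1.78x factor also lines up with the offscreen result below, where both GPUs render at the same resolution and Adreno 420 comes out over 50% ahead.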

GFXBench 3.0 Manhattan (Offscreen)

In an apples-to-apples comparison at the same resolution, Adreno 420 is over 50% faster than Adreno 330. It's also faster than the PowerVR G6430 in the iPad Air.

GFXBench 3.0 T-Rex HD (Onscreen)

Once again we see Adreno 420 drive the MDP/T's 2560 x 1440 panel at the same performance Adreno 330 delivers at 1080p.

GFXBench 3.0 T-Rex HD (Offscreen)

At 1080p, the Adreno 420/S805 advantage grows to 45%.

I've included all of the low-level GFXBench tests below if you're interested in digging any deeper. It's interesting that we don't see a big increase in the ALU test, but we do see far larger gains in the alpha blending and fill rate tests.

GFXBench 3.0 ALU Test (Onscreen)

GFXBench 3.0 ALU Test (Offscreen)

GFXBench 3.0 Alpha Blending Test (Onscreen)

GFXBench 3.0 Alpha Blending Test (Offscreen)

GFXBench 3.0 Driver Overhead Test (Offscreen)

GFXBench 3.0 Driver Overhead Test (Onscreen)

GFXBench 3.0 Fill Rate Test (Offscreen)

GFXBench 3.0 Fill Rate Test (Onscreen)

Comments

  • ArthurG - Thursday, May 22, 2014 - link

    Why do we care about clock speeds? Is it now a new metric of performance? Is the A7, running at only 1.3GHz, a slow SoC? Architecture efficiency and final performance results are what we care about.
    What is important is that TK1 in the MiPad destroys every other Android SoC by a good margin (60fps in T-Rex vs 40 on S805) and with good power efficiency.
    Is it so difficult for the nv haters to admit it?
  • hahmed330 - Friday, May 23, 2014 - link

    If I run my Nexus 7 (2013) playing Asphalt 8, my battery runs out in only 2 hours at 50% brightness.
    I can tell you Tegra K1 + RAM on the TK1 Jetson consumes 6980mW running full tilt at 950MHz in an actively cooled device. Now remember this is a non-mobile device for developers.

  • ArthurG - Wednesday, May 21, 2014 - link

    Well, your post shows big ignorance of the products.
    1/ Tegra 4 was on the 28HPL process while S800/801/805 use 28HPM, which provides nearly 30% better transistors. Oranges vs apples and a big advantage to QC.
    2/ T4 uses A15r1, which is not very well optimized for power efficiency. TK1 now uses A15r3, which provides better efficiency.
    3/ Tegra K1 and S800/801/805 are made on the same 28HPM process, so it's a fair comparison.

    That means T4 vs S800/S801 was an easy win for QC due to the many disadvantages of the T4 design. But TK1 vs S80x shows a completely different story with both using the same node.

    Finally, the TK1 benchmarks are from the Xiaomi MiPad, a 7.9" tablet with no fan, and it still smokes S805...
  • testbug00 - Thursday, May 22, 2014 - link

    I will believe it when I see them... Also, the amazing "950MHz" clockspeed of the K1? Guess where it is in the MiPad?

    In the 600MHz range. Nvidia has to downclock its parts to fit into tablets, much less phones.

    2. Process choice is a manufacturing choice. Nvidia could not get an HPM design? They suffered. Anyhow, Qualcomm will still probably smoke them on perf/watt... which, once again, is what really matters in phones and in most tablets.

    3. Krait "450" cores are the same Krait cores from the 800 (a 2013 product) with more clockspeed. A15r3 is a 2014 product. I can throw meaningless garbage into "fair comparisons" also. You compare the SoC as a whole... K1 will end up faster than the 805. I am convinced of it. Will it matter? Not unless you are looking at putting a chip into a miniPC or a laptop... Or, perhaps a mobile-gamecontroller-with-a-screen. :)

    Cannot wait for SHIELD 2.
  • kron123456789 - Thursday, May 22, 2014 - link

    Read this:
    developer.download.nvidia.com/embedded/jetson/TK1/docs/Jetson_platform_brief_May2014.pdf
  • fivefeet8 - Thursday, May 22, 2014 - link

    The MiPad can get the performance numbers they've shown with a GPU clocked at 600MHz? And that's a bad thing?
  • testbug00 - Thursday, May 22, 2014 - link

    Depends on how/when Samsung, Qualcomm, Mediatek, Rockchip, etc. introduce their next generation chips and how fast they are.

    Apple's will likely beat this on GPU and CPU while using less power... Because, well, Apple has and continues to spend tons of money on optimization. The biggest part is die size... which is a huge advantage Apple has.
  • kron123456789 - Thursday, May 22, 2014 - link

    Apple uses PowerVR GPUs. And only the GX6650 is comparable with Tegra K1 (but it has about a 600-650MHz max frequency and, because of that, fewer GFLOPS).
  • testbug00 - Thursday, May 22, 2014 - link

    And what if the K1 cannot run at full clockspeed in phones/tablets, either to keep clocks reasonable or due to throttling?
    Or if PowerVR adds more units or raises the MHz? (There's no reason to do either, as both raise power consumption without a partial redesign, and these parts are typically "use X power, get as much speed as possible".)
    Or if Apple happens to license a GPU architecture from a company they are close with, say, PowerVR... ;)
  • ArthurG - Thursday, May 22, 2014 - link

    Apple faster? Proof?
    All these SoC manufacturers rely on other companies' tech to build their GPUs. Unlike Nvidia, they can't come up with something new if their IP supplier doesn't have it. And for now, like it or not, no GPU available this year will be more powerful than mobile Kepler. Swallow it; you can't do anything about that.
