GPU Performance

3DMark

Our first GPU test, 3DMark, doesn't do much to show Adreno 420 in a good light. 3DMark isn't the most GPU-intensive test we have, but here we see only marginal increases over Snapdragon 800/Adreno 330. I would be interested to see whether there are any improvements on the power consumption front, since performance doesn't really change.

3DMark 1.2 Unlimited - Overall

3DMark 1.2 Unlimited - Graphics

3DMark 1.2 Unlimited - Physics

 

Basemark X 1.1

Basemark X 1.1 starts to show a difference between Adreno 420 and 330. At medium quality settings we see a 25% increase in performance over the Snapdragon 801-based Adreno 330 devices. Move to higher quality settings and the performance advantage increases to over 50%. Here even NVIDIA's Shield, with its fan-cooled Tegra 4, can't outperform the Adreno 420 GPU.

BaseMark X 1.1 - Overall (Medium)

BaseMark X 1.1 - Overall (High Quality)

BaseMark X 1.1 - Dunes (Medium, Offscreen)

BaseMark X 1.1 - Hangar (Medium, Offscreen)

BaseMark X 1.1 - Dunes (High Quality, Offscreen)

BaseMark X 1.1 - Hangar (High Quality, Offscreen)

GFXBench 3.0

GFXBench 3.0 Manhattan (Onscreen)

Manhattan continues to be a very stressful test, but the onscreen results are pretty interesting. Adreno 420 can drive a 2560 x 1440 display at the same frame rate that Adreno 330 could drive a 1080p display.
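
To put that in perspective, here's a quick back-of-the-envelope calculation (my own arithmetic as an illustration, not data pulled from the charts) of how much extra per-frame work matching frame rates at the MDP/T's native resolution implies:

    # Rough pixel-count comparison (illustrative sketch only): equal onscreen
    # frame rates at 2560x1440 vs. 1920x1080 imply roughly 78% more pixels
    # rendered per frame.
    qhd_pixels = 2560 * 1440   # MDP/T panel
    fhd_pixels = 1920 * 1080   # typical Snapdragon 800/801 device panel

    ratio = qhd_pixels / fhd_pixels
    print(f"1440p is {ratio:.2f}x the pixel count of 1080p "
          f"(~{(ratio - 1) * 100:.0f}% more work per frame at the same fps)")
    # -> 1440p is 1.78x the pixel count of 1080p (~78% more work per frame at the same fps)

In other words, holding frame rate steady while pushing roughly 1.78x the pixels is consistent with the 50%+ offscreen gains measured below.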

GFXBench 3.0 Manhattan (Offscreen)

In an apples-to-apples comparison at the same resolution, Adreno 420 is over 50% faster than Adreno 330. It's also faster than the PowerVR G6430 in the iPad Air.

GFXBench 3.0 T-Rex HD (Onscreen)

Once again we see an example where Adreno 420 is able to drive the MDP/T's panel at 2560 x 1440 at the same performance that Adreno 330 can deliver at 1080p.

GFXBench 3.0 T-Rex HD (Offscreen)

At 1080p, the Adreno 420/S805 advantage grows to 45%.

I've included all of the low level GFXBench tests below if you're interested in digging any deeper. It's interesting that we don't see a big increase in the ALU test but far larger increases in the alpha blending and fill rate tests.

GFXBench 3.0 ALU Test (Onscreen)

GFXBench 3.0 ALU Test (Offscreen)

GFXBench 3.0 Alpha Blending Test (Onscreen)

GFXBench 3.0 Alpha Blending Test (Offscreen)

GFXBench 3.0 Driver Overhead Test (Offscreen)

GFXBench 3.0 Driver Overhead Test (Onscreen)

GFXBench 3.0 Fill Rate Test (Offscreen)

GFXBench 3.0 Fill Rate Test (Onscreen)

Comments

  • testbug00 - Thursday, May 22, 2014

    lol. All right, I'll just ignore the fact that Apple made a custom ARMv8 CPU, the fact that it has a GPU team to match its uncore and CPU teams, and the fact that there are mobile GPU manufacturers who would reasonably license their overall design to Apple. Also, Imagination Tech (and ARM, and Qualcomm, and Samsung, and others) could all make GPUs that were faster than Tegra K1. But there is no reason to. Waste of money, waste of power (in the design), etc.

    Oh, and ignore the fact that Apple is willing to spend more money on die area. You have to remember that Nvidia is planning to sell these at a profit, which means they need to minimize die space. Typical semiconductor design comes down to Power, Performance, and Area. Nvidia's Tegra seems to follow Performance, Area, Power. Tegra K1 might be the first to go Power, Area, Performance (impossible to tell without real retail products...)

    Apple, on the other hand, targets Power, Performance, Area. That means that as long as the chip will fit inside the phone, they would be fine making a 200mm^2 die. Making a larger die means you can reduce power for various reasons. Of course, making a die smaller also allows you to reduce power by shortening distances (this, along with the lack of an inter-die interconnect and larger, faster caches, is a reason why Maxwell managed to reduce power so much).

    I am also using historical precedent:
    -Nvidia claimed Tegra 2 brought mobile up to parity with Xbox 360/PS3 (and Tegra K1; not sure about 3 and 4), which, well, Tegra 2 was not, and Tegra K1 will not be (due to bandwidth for the most part, imho; given more bandwidth, it certainly could beat the Xbox 360/PS3)
    -Nvidia showed Tegra 4 beating the iPad (did it do that for Tegra 2 and 3? I don't remember) and it lost once the next iPad arrived.
    -Nvidia claimed Tegra 2 was great perf/watt. And Tegra 3, and Tegra 4. They were all bad compared to Qualcomm (and Apple).

    I don't take Nvidia's claims for much because, frankly, they stink. Hopefully Tegra K1 fixes that. I would rather we did not have a player in the market "die" (read: move to focusing almost wholly on automotive), especially not after that company finally got its act together.
  • name99 - Thursday, May 22, 2014

    Without going into Apple being faster, it's clearly silly to claim that "all the SOC manufacturers rely on other companies' tech to build their GPU". Who is "all the SOC manufacturers"?
    Qualcomm uses their own GPU, as does nV. Soon enough so will AMD.

    Apple is the only interesting SoC manufacturer that uses someone else's tech for their GPU, and there are plenty of indications (e.g. a large buildup in GPU HW hires) that this is going to change soon --- if not with the A8, then with the A9.
  • fivefeet8 - Thursday, May 22, 2014

    It's hard to have a coherent dialog with you going off on tangents. For this year, it seems the competition from Qualcomm will be the 805, which we now know will not be as performant as the Tegra K1.
  • tuxRoller - Thursday, May 22, 2014

    How do we KNOW this?
    I've struggled to find third-party, comprehensive benchmarks for either the MiPad or the TK1, especially ones that include power draw (those numbers some random person threw up in a forum aren't tremendously useful with regard to the MiPad).
    Also, the Adreno 420's drivers are, apparently, not well optimized.
    Basically, until AT, Tom's, or someone similar gets their hands on it, I won't feel like I know how things stack up.
  • fivefeet8 - Friday, May 23, 2014

    There are slides from the IHV presentation for the MiPad showing power usage patterns. Phoronix also did some testing of the TK1 board, which shows power usage well below what some seem to be thinking. As for Adreno drivers, they've always been bad and not well optimized.

    https://dolphin-emu.org/blog/2013/09/26/dolphin-em...
  • tuxRoller - Friday, May 23, 2014

    When did Phoronix release power numbers?
    The MiPad presentation looked like a copy-paste of the Nvidia material.
    The Adreno drivers aren't great, which should tell you how good that hardware really is. Rob Clark, the lead developer of the open-source freedreno driver, is already at least matching their performance up to OpenGL 2.1 / GLES 2. He's mentioned that he's found evidence of the hardware supporting advanced GL extensions not advertised by the driver. This may change as Qualcomm has recently joined Linaro, so they will probably be seeking to work more with upstream. The end result of that process is always better quality.
    Lastly, don't forget that Adreno is a legacy of Bitboys, and that Qualcomm is no Intel, even though they are the same size. Qualcomm seems to actually be interested in making top-performing GPUs.
  • Ghost420 - Friday, May 23, 2014

    Isn't Apple's die known to be the biggest of all the SoCs? They can afford to have a big, power-sucking SoC because they can optimize for it. That's the only reason iPhones last long on battery... besides being lower clocked, because look how small and dinky that screen is...
  • Ghost0420 - Wednesday, May 28, 2014

    Exactly, it's downclocked to 600MHz and still spanking the competition.
  • kron123456789 - Thursday, May 22, 2014

    "Guess where it is in the MiPad?

    In the 600Mhz range. " — Proof? BTW, even if it is, MiPad is still more powerful than S805 MDP.
  • testbug00 - Thursday, May 22, 2014

    Proof is loose... based on talking with someone who has worked closely with Nvidia and TSMC in the past (Tesla, Fermi, a few Tegra chips).

    They have been quite accurate before... When the tablet comes, we will see it.

    On the other hand, silence tells us a lot too... Where is Nvidia talking up their "950Mhz GPU" in a tablet? I think the 600MHz clock-speed band (by that, I should clarify, I mean between 600 and 699) is still quite impressive... It just shows why the chip won't go into phones...
