Final Words

Qualcomm tends to stagger the introduction of new CPU and GPU IP, and Snapdragon 805 ultimately serves as the introduction vehicle for its Adreno 420 GPU. The performance gains over Adreno 330/Snapdragon 801 can be substantial, particularly at high resolutions and/or higher quality settings. Excluding 3DMark, we saw a 20 - 50% increase in GPU performance compared to Snapdragon 801. Adreno 420 is a must-have if you want to drive a higher resolution display at the same performance as an Adreno 330/1080p combination. For OEMs contemplating a move to higher-than-1080p screens in the near term, Snapdragon 805 may well make sense.
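
To put that resolution argument in numbers, here is a minimal back-of-the-envelope sketch of the pixel-throughput arithmetic (the resolutions are common panel sizes; everything else is simple arithmetic):

```python
# Rough pixel-throughput arithmetic: how much more fill rate a GPU
# needs to hold frame rate constant as display resolution grows.
resolutions = {
    "1080p": 1920 * 1080,
    "1440p": 2560 * 1440,
    "4K UHD": 3840 * 2160,
}

base = resolutions["1080p"]
for name, pixels in resolutions.items():
    print(f"{name}: {pixels / base:.2f}x the pixels of 1080p")

# Output:
# 1080p: 1.00x the pixels of 1080p
# 1440p: 1.78x the pixels of 1080p
# 4K UHD: 4.00x the pixels of 1080p
```

Note that 1440p pushes roughly 78% more pixels than 1080p, so a 20 - 50% GPU uplift narrows, but does not by itself close, the gap at identical quality settings; real-world scaling is rarely perfectly linear in pixel count.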

The gains on the CPU side are far more subtle. At best we noted a 6% increase in performance compared to a 2.5GHz Snapdragon 801, but depending on thermal/chassis limitations of shipping devices you may see even less of a difference.

Qualcomm tells us that some of its customers will choose to stay on Snapdragon 801 until the 810 arrives next year, while others will release products based on the 805 in the interim. Based on our results here, if an OEM is looking to specifically target the gaming market, I can see Snapdragon 805 making a lot of sense. For most of the OEMs that just launched Snapdragon 801-based designs, however, I don't know that there's a huge reason to release a refresh in the interim.

I am curious to evaluate the impact of the ISP changes, as well as to dive deeper into 4K capture and H.265 decode, but that will have to wait until we see shipping designs. The other big question is just how power efficient Adreno 420 is compared to Adreno 330. Qualcomm's internal numbers are promising, citing a 20% reduction in power consumption at effectively the same performance in GFXBench's T-Rex HD onscreen test.
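
As a quick sanity check on what that claim implies, here is a minimal sketch of the perf-per-watt arithmetic (the 20% figure is Qualcomm's; the normalized values are assumptions for illustration):

```python
# Qualcomm's claim: same GFXBench T-Rex HD onscreen performance as
# Adreno 330 at 20% lower power. Normalizing both GPUs to the same
# performance score, the perf/W ratio follows directly.
perf = 1.0        # identical normalized performance (the claim)
power_330 = 1.0   # normalized Adreno 330 power draw
power_420 = 0.8   # 20% reduction claimed for Adreno 420

gain = (perf / power_420) / (perf / power_330)
print(f"perf/W improvement: {gain:.2f}x")  # perf/W improvement: 1.25x
```

If the claim holds, Adreno 420 delivers about 1.25x the performance per watt of Adreno 330 in that test.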


149 Comments

  • testbug00 - Thursday, May 22, 2014 - link

    lol. All right, I'll just ignore the fact that Apple made a custom ARMv8 CPU, the fact that it has a GPU team to match its uncore and CPU teams, and the fact that there are mobile GPU manufacturers who would reasonably license their overall design to Apple. Also, Imagination Tech (and ARM, and Qualcomm, and Samsung, and others) could all make GPUs that were faster than Tegra K1. But there is no reason to. Waste of money, waste of power (in the design), etc.

    Oh, and ignore the fact that Apple is willing to spend more money on die area. You have to remember that Nvidia is planning to sell these at a profit, which means they need to minimize die space. Typical semiconductor design comes down to Power, Performance and Area. Nvidia's Tegra seems to follow Performance, Area, Power. Tegra K1 might be the first to go Power, Area, Performance (impossible to tell without real retail products...)

    Apple, on the other hand, targets Power, Performance, Area. That means that as long as the chip will fit inside the phone, they would be fine making a 200mm^2 die. Making a larger die means you can reduce power, for various reasons. Of course, making a die smaller also allows you to reduce power by shortening distances (this, along with the lack of an interdie interconnect and larger, faster caches, is a reason why Maxwell managed to reduce power so much).
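
    A minimal sketch of the wide-and-slow tradeoff described above, using the standard dynamic-power approximation P ≈ C·V²·f (the numbers are made-up illustrative values, not measurements of any real SoC):

    ```python
    # Dynamic power scales roughly as switched capacitance * voltage^2
    # * frequency. A wider design (more area, more capacitance) can
    # match throughput at a lower clock, and lower clocks tolerate
    # lower voltage, which pays off quadratically.
    def dynamic_power(cap, volts, freq_ghz):
        return cap * volts**2 * freq_ghz

    # Narrow, fast design: 1 unit of capacitance at 2.0 GHz and 1.0 V.
    narrow = dynamic_power(cap=1.0, volts=1.0, freq_ghz=2.0)

    # Wide, slow design: twice the capacitance (bigger die) at 1.0 GHz,
    # assumed to allow dropping to 0.8 V at the same total throughput.
    wide = dynamic_power(cap=2.0, volts=0.8, freq_ghz=1.0)

    print(f"narrow: {narrow:.2f}, wide: {wide:.2f}")  # narrow: 2.00, wide: 1.28
    ```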

    I am also using historical precedence:
    - Nvidia claimed Tegra 2 brought mobile up to parity with the Xbox 360/PS3 (and claims the same for Tegra K1; not sure about 3 and 4), which, well, Tegra 2 was not, and Tegra K1 will not be (due to bandwidth for the most part, imho; given more bandwidth, it certainly could beat the Xbox 360/PS3)
    - Nvidia showed Tegra 4 beating the iPad (did it do the same for Tegra 2 and 3? I don't remember) and it lost to the next iPad.
    - Nvidia claimed Tegra 2 was great perf/watt. And Tegra 3, and Tegra 4. They were all bad compared to Qualcomm (and Apple).

    I don't take Nvidia's claims for much because, well, they stink. Hopefully Tegra K1 fixes that. I would rather we did not have a player in the market "die" (read: move to focusing almost wholly on automotive), especially not right after that company finally got its act together.
  • name99 - Thursday, May 22, 2014 - link

    Without going into Apple being faster, it's clearly silly to claim that "all the SOC manufacturers rely on other companies tech to build their GPU". Who is "all the SOC manufacturers"?
    Qualcomm uses its own GPU, as does nV. Soon enough, so will AMD.

    Apple is the only interesting SoC manufacturer that uses someone else's tech for their GPU, and there are plenty of indications (e.g. a large buildup in GPU HW hires) that this is going to change soon: if not with the A8, then with the A9.
  • fivefeet8 - Thursday, May 22, 2014 - link

    It's hard to have a coherent dialog with you going off on tangents. For this year, it seems the competition from Qualcomm will be the 805, which we now know will not be as performant as the Tegra K1.
  • tuxRoller - Thursday, May 22, 2014 - link

    How do we KNOW this?
    I've struggled to find third-party, comprehensive benchmarks for either the MiPad or the TK1, especially ones that include power draw (the numbers some random person threw up in a forum aren't tremendously useful with regard to the MiPad).
    Also, the Adreno 420's drivers are, apparently, not well optimized.
    Basically, until AT, Tom's, or someone similar gets their hands on it, I won't feel like I know how things stack up.
  • fivefeet8 - Friday, May 23, 2014 - link

    There are slides from the IHV presentation for the MiPad showing power usage patterns. Phoronix also did some testing of the TK1 board which shows power usage well below what some seem to be assuming. As for Adreno drivers, they've always been bad and not well optimized.

    https://dolphin-emu.org/blog/2013/09/26/dolphin-em...
  • tuxRoller - Friday, May 23, 2014 - link

    When did Phoronix release power numbers?
    The MiPad presentation looked like a copy-paste of the Nvidia material.
    The Adreno drivers aren't great, which should tell you how good the hardware really is. Rob Clark, the lead developer of the open source freedreno driver, is already at least matching their performance up to OpenGL 2.1 / GL ES 2.0. He has mentioned that he's found evidence of the hardware supporting advanced GL extensions not advertised by the driver. This may change as Qualcomm has recently joined Linaro, so they will probably be seeking to work more with upstream. The end result of that process is always better quality.
    Lastly, don't forget that Adreno is a legacy of Bitboys, and that Qualcomm is no Intel, even though they are the same size. Qualcomm seems to actually be interested in making top-performing GPUs.
  • Ghost420 - Friday, May 23, 2014 - link

    Isn't Apple's die known to be the biggest of all the SoCs? They can afford to have a big, power-sucking SoC because they can optimize for it. It's the only reason iPhones last long on battery... besides being lower clocked, because look how small and dinky that screen is...
  • Ghost0420 - Wednesday, May 28, 2014 - link

    Exactly, it's DOWNCLOCKED to 600MHz and still spanking the competition.
  • kron123456789 - Thursday, May 22, 2014 - link

    "Guess where it is in the MiPad?

    In the 600Mhz range. " — Proof? BTW, even if it is, MiPad is still more powerful than S805 MDP.
  • testbug00 - Thursday, May 22, 2014 - link

    Proof is loose... based on talking with someone who has worked closely with Nvidia and TSMC in the past (Tesla, Fermi, a few Tegra chips).

    They have been quite accurate before... When the tablet comes, we will see.

    On the other hand, silence also tells us a lot... Where is Nvidia's talk about their "950MHz GPU" in a tablet? I think the 600MHz clock speed band (by which I mean between 600 and 699MHz) is still quite impressive... It just shows why the chip won't go into phones...
