GPU Performance

On the GPU side of things, Qualcomm's Snapdragon 820 is equipped with the Adreno 530 clocked at 624 MHz. To see how it performs, we ran it through our standard 2015 benchmark suite. In the future, once we've tested more devices on our new suite and can determine relative performance, we should be able to discuss how the Galaxy S7 performs in that context as well.

[Benchmark charts: GFXBench 3.0 Manhattan (Onscreen/Offscreen), GFXBench 3.0 T-Rex HD (Onscreen/Offscreen), BaseMark X 1.1 Overall (High Quality), BaseMark X 1.1 Dunes (High Quality, Offscreen), BaseMark X 1.1 Hangar (High Quality, Offscreen)]

At a high level, GPU performance appears to be mostly unchanged when comparing the Galaxy S7 to the Snapdragon 820 MDP. Performance in general is quite favorable assuming that the render resolution doesn't exceed 2560x1440.
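
As a rough sanity check on why render resolution matters here: GFXBench's offscreen tests render to a fixed 1080p target, while onscreen tests run at the panel's native resolution. The sketch below is a back-of-the-envelope estimate only, and the 40 fps figure is purely hypothetical.

```python
# GFXBench offscreen tests render to a fixed 1920x1080 target; onscreen
# tests run at the panel's native resolution. At 2560x1440, each frame
# carries ~1.78x the pixels, so a fill-rate-bound workload gives up
# roughly that factor onscreen.

onscreen = 2560 * 1440   # Galaxy S7 native resolution (pixels per frame)
offscreen = 1920 * 1080  # GFXBench offscreen render target

ratio = onscreen / offscreen
print(f"Pixels per frame, onscreen vs offscreen: {ratio:.2f}x")

# Hypothetical: a GPU sustaining 40 fps offscreen in a purely
# fill-rate-bound test would top out onscreen around:
print(f"Estimated onscreen ceiling: {40 / ratio:.1f} fps")
```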

Overall, the Adreno 530 is clearly one of the best GPUs you can get in a mobile device today; the Kirin 950's GPU falls well short in comparison. One could argue that turbo frequencies don't make a lot of sense for a GPU, but given that mobile gaming workloads can be quite bursty and gaming sessions tend to be short, I would argue that a GPU capable of significant overdrive performance makes a lot of sense. The A9 is comparable once you account for the resolution of iOS devices, but in the offscreen results the Adreno 530 pulls away. Of course, the real question now is how the Adreno 530 compares to the Exynos 8890's GPU in the international Galaxy S7, but that's a question that will have to be left for another day.

Comments

  • ah06 - Tuesday, March 8, 2016 - link

    Samsung's core is already done, I think; it's the Mongoose core in the 8890, the one in the international variants. It's a slightly weaker core than Kryo.
  • Speedfriend - Tuesday, March 8, 2016 - link

    The A7 to A8 tock was 15%, and that was partly from a higher clock speed in a bigger body. It will be interesting to see what this tock brings.
  • lilmoe - Tuesday, March 8, 2016 - link

    Contrary to what most reviewers want to believe, when designing application processor cores, companies like ARM, Qualcomm, and Samsung aim for a "sweet spot" of load-to-efficiency ratios, not MAX single-threaded performance.

    Their benchmark is common Android workloads (which, btw, rarely saturate a Cortex A57 at 1.8GHz), since that's what makes up the vast majority of the mobile application processor market. They measure the average/mean workload needs and optimize efficiency for that.

    Android isn't as efficient as iOS and Windows Phone/10 Mobile at hardware acceleration and GPU compositing; it's much more CPU-bound. It doesn't benefit as much from race-to-sleep on mobile devices, and its CPU cores remain significantly more active when rendering various aspects of the UI and when scrolling.
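
The sweet-spot argument above can be illustrated with a toy energy model: dynamic energy per task grows with the square of voltage (which rises with frequency), while leakage energy grows with runtime (which shrinks with frequency), so the minimum-energy point tends to sit at a mid-range operating point rather than at either extreme. Every number in this sketch is hypothetical.

```python
# Toy model of the DVFS "sweet spot": dynamic energy per task scales with
# V^2 (worse at high frequency, where voltage must rise), while leakage
# energy scales with runtime (worse at low frequency, where the core stays
# awake longer before it can power-gate). All numbers are hypothetical.

WORK = 1e9     # cycles needed to finish the task
P_LEAK = 0.10  # W of leakage while the core is awake (hypothetical)
K = 0.47       # effective switching-energy constant, J per V^2 per Gcycle

# (frequency in GHz, required voltage in V) -- made-up operating points
opp = [(0.6, 0.80), (1.0, 0.80), (1.4, 0.95), (1.8, 1.10), (2.2, 1.25)]

for f, v in opp:
    t = WORK / (f * 1e9)              # seconds of active time
    e_dyn = K * v * v * (WORK / 1e9)  # dynamic energy ~ C * V^2 * cycles
    e_leak = P_LEAK * t               # leakage accrues over the runtime
    print(f"{f:.1f} GHz @ {v:.2f} V: {(e_dyn + e_leak) * 1000:4.0f} mJ per task")

# With these made-up numbers, per-task energy bottoms out at the mid-range
# 1.0 GHz point, not at either extreme -- the "sweet spot".
```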
  • tuxRoller - Tuesday, March 8, 2016 - link

    Can you explain how you measure the relative "efficiencies" of the "hardware acceleration and GPU compositing"?
  • lilmoe - Wednesday, March 9, 2016 - link

    By measuring CPU and RAM utilization when performing said tasks. More efficient implementations would offload more of the work to dedicated co-processors (in this case, the GPU) and would use less RAM.

    Generally, the more CPU utilization you need for these tasks, the less efficient the implementation. Android uses more CPU power and more RAM for basic UI rendering than iOS and WP/10M.
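
A rough sketch of the kind of measurement described above, assuming a connected Android device reachable over adb; the package name is a placeholder, and the swipe coordinates would need adjusting per device.

```python
# Drive a scroll gesture over adb while sampling aggregate CPU use from
# /proc/stat, then dump the app's memory footprint. A minimal sketch,
# not a rigorous benchmark.
import subprocess
import time

PACKAGE = "com.example.app"  # hypothetical app under test

def adb(*args):
    """Run an adb shell command and return its stdout."""
    return subprocess.run(["adb", "shell", *args],
                          capture_output=True, text=True).stdout

def cpu_jiffies():
    # First line of /proc/stat aggregates jiffies across all cores:
    # "cpu user nice system idle iowait irq softirq ..."
    fields = adb("cat", "/proc/stat").splitlines()[0].split()[1:]
    vals = [int(x) for x in fields]
    return sum(vals), vals[3] + vals[4]  # (total, idle + iowait)

total0, idle0 = cpu_jiffies()
for _ in range(10):
    # Swipe up to scroll: x1 y1 x2 y2 duration_ms
    adb("input", "swipe", "500", "1500", "500", "300", "100")
    time.sleep(0.3)
total1, idle1 = cpu_jiffies()

busy = (total1 - total0) - (idle1 - idle0)
print(f"CPU busy while scrolling: {100 * busy / (total1 - total0):.1f}%")

# Per-app memory footprint (PSS) from Android's meminfo service:
print("\n".join(adb("dumpsys", "meminfo", PACKAGE).splitlines()[:8]))
```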
  • tuxRoller - Saturday, March 12, 2016 - link

    How do you measure this so that you can ignore differences in the system (like the textures chosen)? You'd also have to make sure they're running on the same hardware.
    The best you can do is probably to test Android and Windows on the same phone (this puts Windows at a bit of a disadvantage, since Android allows very close coupling of drivers, as its HAL is pretty permissive). Then you run a native game on each.
    If you've found a way to do this, I, and Google, would love to see the results.
    Other than for 2d (which NOBODY, including directdraw/2d or quartz, fully accelerates), Google really hammers the GPU through use of shared memory, overlays, and whatever else may be of use. There's obviously more optimization for them to do, as they still overdraw WAY too much in certain apps, and they've obviously got a serious issue with their input latency, but it's a modern system. Probably the most modern, as it's been developed from scratch most recently.
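
On the overdraw point: Android's HWUI renderer can tint each pixel by how many times it was drawn per frame, which makes overdraw easy to eyeball. A small helper to toggle that debug property over adb (assumes a device with adb access; the `service call activity` poke is a commonly used trick to make running apps re-read system properties):

```python
# Toggle Android's GPU overdraw visualization via adb.
import subprocess

def set_overdraw(show: bool):
    value = "show" if show else "false"
    subprocess.run(["adb", "shell", "setprop", "debug.hwui.overdraw", value],
                   check=True)
    # SYSPROPS_TRANSACTION: asks running apps to re-read system properties
    # so the change takes effect without restarting them.
    subprocess.run(["adb", "shell", "service", "call", "activity", "1599295570"],
                   check=True)

# Tint scale: blue = 1x overdraw, green = 2x, pink = 3x, red = 4x or more.
set_overdraw(True)
```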
  • Dobson123 - Tuesday, March 8, 2016 - link

    In the 2016 web browsing battery life test, the S6 Edge is 20% worse than the S6, and the LG G4's number is also way too low.
  • lilmoe - Tuesday, March 8, 2016 - link

    I also thought the difference in battery life between the S6 and S6 Edge was off. They either posted the wrong data, or something went wrong while testing.
  • MonkeyPaw - Tuesday, March 8, 2016 - link

    I'd agree. When one of the phones goes from being upper middle of the pack on the old benchmark to being dead last (and woefully so), I have to wonder if something is really wrong with the new test. I've used the G4 for 6 months and have rarely had battery concerns over a day of "regular" use. I've owned several phones, and the G4 is a trooper.
  • Ryan Smith - Tuesday, March 8, 2016 - link

    We're re-checking the S6 Edge. We've had issues before with that specific phone.
