The Difficulty in Analyzing GPU Turbo

I still haven't managed to get two identical devices with and without GPU Turbo. The closest practical comparison I was able to make was between the Huawei P20 and the Honor Play: two devices that use the same SoC and memory, albeit in different chassis.

The differences between the two phones are not just the introduction of GPU Turbo: the Honor Play also ships a newer Arm Bifrost driver release, r12p0, whereas the P20 uses the older r9p0 release. Unfortunately no mobile vendor publishes driver release notes, so we can't separate possible improvements on the GPU driver side from the actual improvements that GPU Turbo brings.

Huawei P20 (no GPU Turbo)

Honor Play (GPU Turbo)

In terms of raw frame rates it was extremely hard to tell the two phones apart, as PUBG tops out at 40 FPS on both devices. One thing that can be measured far more empirically is power consumption, although we could have invested a lot more time inspecting frame-time jitter and just how noticeable it would be in practice; a rough sketch of that kind of analysis follows.
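To illustrate the kind of jitter analysis we mean, here is a minimal Python sketch. The frame-time traces are hypothetical placeholders rather than measurements from either phone; the point is only that an identical average FPS can hide very different frame-time consistency.

```python
# Minimal frame-time jitter sketch; the traces below are hypothetical, not measured data.
import statistics

def jitter_stats(frame_times_ms):
    """Summarize a frame-time trace: average FPS plus spread metrics."""
    avg_ms = statistics.mean(frame_times_ms)
    return {
        "avg_fps": 1000.0 / avg_ms,
        "stdev_ms": statistics.stdev(frame_times_ms),
        # Near-worst-case frame time; spikes here are what players perceive as stutter.
        "p99_ms": sorted(frame_times_ms)[int(0.99 * (len(frame_times_ms) - 1))],
    }

# Two hypothetical traces with the same ~40 FPS average but very different consistency.
steady = [25.0, 25.1, 24.9, 25.0, 25.2, 24.8] * 100
spiky = [22.0, 22.0, 22.0, 22.0, 22.0, 40.0] * 100

print(jitter_stats(steady))  # low stdev, p99 close to the average
print(jitter_stats(spiky))   # same average FPS, but regular 40 ms spikes
```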

Here the Honor Play did seem to have an advantage, coming in at ~3.9W while rendering the above scene, a tad less than the P20's ~4.7W. These figures are total device power, and obviously the screen and other components differ between the two models. It nevertheless represents around a 15% difference in power, although to be clear we can't rule out that the two chips are simply different bins, i.e. that they have different power/voltage characteristics due to random manufacturing variance, which is common in this space.
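For reference, the relative saving implied by two such averaged figures is straightforward to compute; the small sketch below uses the rounded ~4.7W and ~3.9W values quoted above, so the exact percentage will differ slightly from one derived from the unrounded measurements.

```python
# Relative power saving of one total-device power figure versus a baseline.
def relative_saving(p_baseline_w, p_new_w):
    """Fractional power reduction of the new device versus the baseline."""
    return (p_baseline_w - p_new_w) / p_baseline_w

# Rounded averages quoted above (Huawei P20 vs Honor Play in the PUBG scene).
print(f"{relative_saving(4.7, 3.9):.1%}")  # roughly 17% with these rounded figures
```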

Huawei has also quoted efficiency data for the Kirin 980. Still, it does very much look like GPU Turbo has an efficiency advantage; however, the roughly 10% figure presented during the Kirin 980 keynote seems a lot closer to reality than the 30% promised in the marketing materials.
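As a side note on reading such figures: an efficiency claim can be stated either as a performance-per-watt gain or as a direct power reduction at the same frame rate, and the two are not interchangeable. The sketch below is only an illustration of that distinction and is not tied to Huawei's exact definition of its 10% and 30% numbers.

```python
# How an "X% more efficient" claim translates into power at the same frame rate,
# depending on whether it is read as a perf/W gain or as a direct power cut.
def power_saving_from_perf_per_watt_gain(gain):
    """If perf/W improves by `gain` at an unchanged FPS, power drops by 1 - 1/(1 + gain)."""
    return 1.0 - 1.0 / (1.0 + gain)

for claim in (0.10, 0.30):  # the ~10% keynote figure and the 30% marketing figure
    print(f"{claim:.0%} claim: as perf/W gain -> {power_saving_from_perf_per_watt_gain(claim):.1%} "
          f"lower power at the same FPS; as a direct power cut -> {claim:.0%} lower power")
```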

GPU Turbo Is Real, Just Be Wary of Marketing Numbers

One thing this article should make clear is that GPU Turbo itself is not just a marketing ploy, but rather a very real and innovative solution that tries to address the weaknesses of the current-generation Kirin chipsets. Kirin still sits well behind Snapdragon's Adreno graphics in both performance and efficiency, and because Huawei cannot license Adreno, it has to make the best of what it has, short of dedicating more die space to its GPUs.

However, much of GPU Turbo's technical merit has been overshadowed by overzealous marketing claims that are nothing short of misleading. More on this on the next page.

By nature of being a software solution, GPU Turbo can only augment the hardware; if the hardware can't deliver, neither can the software. A lot of the confusion and misleading material can be attributed directly to the way the Honor Play was presented to the public. The reality is that, even with GPU Turbo, the Honor Play is still not competitive with Snapdragon 845 devices, even though it tries to portray itself as such. The differences in silicon are simply too great to be overcome by a software optimization, no matter how innovative the new mechanism is.

Comments

  • jjj - Tuesday, September 4, 2018 - link

    Good first step but could be expanded well beyond the GPU and the entire system built around it.
    Been thinking about this for some years and could lead to very different hardware if you have an NPU manage everything.
  • ZolaIII - Tuesday, September 4, 2018 - link

    NNPU didn't even earn to eat hire. No one will do it better than your self. Switching off two big cores does much more than all this fuss about GPU turbo.
  • mode_13h - Tuesday, September 4, 2018 - link

    Not to agree or disagree with your point, but I think you mean "here" instead of "hire".
  • mode_13h - Tuesday, September 4, 2018 - link

    You mean like this?

    https://www.phoronix.com/scan.php?page=news_item&a...
  • mode_13h - Tuesday, September 4, 2018 - link

    Anyway, I'm thinking a deep learning model could probably do a better job at managing core clock speeds and perhaps even deciding whether to schedule certain tasks on big vs. little cores.
  • ZolaIII - Wednesday, September 5, 2018 - link

    We (not AI) need to evolve scheduler logic and add SMP affinity flags to processes. Only then can the games begin. Here goes the rain again. By the way thanks for that.
  • mr_tawan - Tuesday, September 4, 2018 - link

    Does it raytrace?
  • Manch - Wednesday, September 5, 2018 - link

    I think the question that should be asked is:

    Can it raytrace Crysis?
  • Lord of the Bored - Wednesday, September 5, 2018 - link

    I'd settle for raytracing FEAR.
  • sing_electric - Tuesday, September 4, 2018 - link

    This presents a quandary for benchmarking/reviewing: In the past, if a company released drivers that changed the performance of the device while running a specific app, we'd all call it "cheating" if that app was a benchmark. However, by extending the "cheats" to other apps, the user sees real benefits, even though it's the same behavior.

    It also means that performance numbers have to be taken with even more salt, because the performance on a popular app which has been "Turboed" by Huawei might not be indicative of the performance you see if you, say, only play less popular games that Huawei hasn't profiled.
