The Difficulty in Analyzing GPU Turbo

I still haven’t managed to get two identical devices with and without GPU Turbo. The closest practical comparison I was able to make is between the Huawei P20 and the Honor Play. These are two devices that use the same SoC and memory, albeit in different chassis.

The differences between the two phones go beyond the introduction of GPU Turbo: the Honor Play also ships with a newer Arm Bifrost driver, r12p0, while the P20 had the r9p0 release. Unfortunately no mobile vendor publishes driver release notes, so we can't separate possible improvements on the GPU driver side from the actual improvements that GPU Turbo brings.


Huawei P20 (no GPU Turbo)


Honor Play (GPU Turbo)

In terms of raw frame rates, it was extremely hard to tell the two phones apart: PUBG tops out at 40 FPS on both devices. We could have invested a lot more time inspecting frame-time jitter and just how noticeable it would be in practice, but one thing that can be measured very empirically is power consumption.

Here the Honor Play did seemingly have an advantage, coming in at ~3.9W while rendering the above scene, a tad less than the P20's ~4.7W. These figures are total device power, and obviously the screen and the rest of the device components differ between the two models. It nevertheless represents a roughly 17% difference in power, although to be clear we can't rule out the possibility that the two chips are simply different bins, i.e. they have different power/voltage characteristics due to random manufacturing variance, which is common in this space.
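For readers who want to reproduce the arithmetic, here is a minimal sketch of how the relative figure falls out of the two total-device measurements quoted above (the snippet is purely illustrative):

```python
# Total device power measured while rendering the same PUBG scene (figures quoted above).
p20_watts = 4.7         # Huawei P20, no GPU Turbo
honor_play_watts = 3.9  # Honor Play, GPU Turbo enabled

# Relative reduction of the Honor Play versus the P20.
reduction = (p20_watts - honor_play_watts) / p20_watts
print(f"Power reduction: {reduction:.1%}")  # -> Power reduction: 17.0%
```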

Huawei has also quoted its own efficiency data for the Kirin 980.

Still, it does very much look like GPU Turbo brings an efficiency advantage; however, the 10% figure presented during the Kirin 980 keynote seems a lot closer to reality than the 30% promised in the marketing materials.

GPU Turbo Is Real, Just Be Wary of Marketing Numbers

One thing that should not be misunderstood from this article is that GPU Turbo is not just a marketing ploy; it is a very real and innovative solution that tries to address the weaknesses of the current-generation Kirin chipsets. Kirin still sits well behind both the performance and efficiency of Snapdragon-based Adreno graphics, and because Huawei cannot license Adreno, it has to make the best of what it has, aside from dedicating more die space to its GPUs.

However, much of GPU Turbo's technical merit has been overshadowed by overzealous marketing claims that are nothing short of misleading. More on this on the next page.

Being a software solution, GPU Turbo can only augment the hardware; if the hardware can't deliver, neither can the software. A lot of the confusion and misleading material can be attributed directly to the way the Honor Play was presented to the public. The reality is that, even with GPU Turbo, the Honor Play is still not competitive with Snapdragon 845 devices, even though it wants to portray itself as such. The differences in silicon are simply too great to be overcome by a software optimization, no matter how innovative the new mechanism is.

Comments

  • eastcoast_pete - Tuesday, September 4, 2018

    Thanks Andrei! I agree that this is, in principle, an interesting way to adjust power use and GPU performance in a finer-grained way than otherwise implemented. IMO, it also seems to be an attempt to push HiSilicon's AI core, as its other benefits are a bit more hidden for now (for lack of a better word). Today's power modes (at least on Android) are a bit all-high or all-low, so anything finer-grained is welcome. Question: how long can the "turbo" turbo for before it gets a bit warm for the SoC? Did Huawei say anything about thermal limitations? I assume the AI is adjusting according to outside temperature and the SoC-to-outside temperature differential?

    Regardless of whether it's AI-supported or not, I frequently wish I could more finely adjust power profiles for CPU, GPU and memory and make choices for my phone myself, along the lines of: 1. Strong, short CPU and GPU bursts enabled, otherwise balanced, to account for thermals and battery use (most everyday use, no gaming), 2. No burst, energy saver all round (need to watch my battery use), and 3. High power mode limited only by thermals (gaming mode), but allowing power allocations to vary between CPU and GPU cores. Intelligent management and power allocation would be great for all of these, but especially 3.
  • Ian Cutress - Tuesday, September 4, 2018

    GPU Turbo also has a CPU mode, if there isn't an NPU present. That's enabling Huawei to roll it out to older devices. The NPU does make it more efficient though.

    In your mode 3, battery life is still a concern. Pushing the power causes efficiency to decrease as the hardware is pushed to the edge of its capabilities. The question is how much of a trade-off is valid? Thermals can also ramp a lot - you'll hit skin temperature limits a lot earlier than you think. That also comes down to efficiency and design.
  • kb9fcc - Tuesday, September 4, 2018

    Sounds reminiscent of the days when nVidia and ATI would cook some code into their drivers that could detect when certain games and/or benchmarking tools were being run and tweak the performance to return results that favored their GPU.
  • mode_13h - Tuesday, September 4, 2018

    Who's to say Nvidia isn't already doing a variation of GPU Turbo, in their game-ready drivers? The upside is less, with a desktop GPU, but perhaps they could do things like preemptively spike the core clock speed and dip the memory clock, if they knew the next few frames would be shader-limited but with memory bandwidth to spare.
  • Kvaern1 - Tuesday, September 4, 2018

    I don't suppose China has a law that punishes partyboss-owned corporations for making wild dishonest claims.
  • darckhart - Tuesday, September 4, 2018

    ehhh it's getting hype now, but I bet it will only be supported on a few games/apps. it's a bit like nvidia's game ready drivers: sure the newest big name game releases get support (but only for newer gpu) and then what happens when the game updates/patches? will the team keep the game in the library and let the AI keep testing so as to keep it optimized? how many games will be added to the library? how often? which SoC will continue to be supported?
  • mode_13h - Tuesday, September 4, 2018

    Of course, if they just operated a cloud service that automatically trained models based on automatically-uploaded performance data, then it could easily scale to most apps on most phones.
  • Ratman6161 - Tuesday, September 4, 2018

    meh....only for games? So what. Yes, I know a lot of people reading this article care about games, but for those of us who don't this is meaningless. But looking at it as a gamer might, it still seems pretty worthless. Per soc and per game? That's going to take constant updates to keep up with the latest releases. And how long can they keep that up? Personally if I were that interested in games, I'd just buy something that's better at gaming to begin with.
  • mode_13h - Tuesday, September 4, 2018

    See my point above.

    Beyond that, the benefits of a scheme like this, even on "something that's better at gaming to begin with", are longer battery life and less heat. Didn't you see the part where it clocks everything just high enough to hit 60 fps? That's as fast as most phones' displays will update, so any more and you're wasting power.
  • mode_13h - Tuesday, September 4, 2018

    I would add that the biggest benefit is to be had by games, since they use the GPU more heavily than most other apps. They also have an upper limit on how fast they need to run.

    However, a variation on this could be used to manage the speeds of different CPU cores and the distribution of tasks between them.
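
To illustrate the behaviour discussed in the thread above - clocking the GPU just high enough to sustain the display's refresh rate, and why the highest operating points are disproportionately expensive - here is a minimal sketch of a frame-rate-targeted DVFS governor. The operating-point table, thresholds, and function names are purely illustrative assumptions; this is a toy feedback loop, not Huawei's actual mechanism.

```python
# Minimal sketch of a frame-rate-targeted DVFS governor, for illustration only.
# The operating points and simple step-up/step-down rule are assumptions; the
# real GPU Turbo mechanism is a trained, per-game model, not this toy loop.

# Hypothetical GPU operating points: (frequency in MHz, voltage in V).
OPERATING_POINTS = [(300, 0.60), (450, 0.65), (600, 0.70), (750, 0.80), (850, 0.90)]
TARGET_FPS = 60.0

def dynamic_power(freq_mhz: float, voltage: float, capacitance: float = 1e-3) -> float:
    """Dynamic power scales roughly with C * V^2 * f, which is why the top
    operating points cost disproportionately more energy per frame."""
    return capacitance * voltage ** 2 * freq_mhz

def pick_operating_point(measured_fps: float, current_idx: int) -> int:
    """Step the operating point up when we miss the frame-rate target, down
    when we comfortably exceed it, and otherwise hold steady."""
    if measured_fps < TARGET_FPS - 2 and current_idx < len(OPERATING_POINTS) - 1:
        return current_idx + 1   # missing 60 fps: raise clocks
    if measured_fps > TARGET_FPS + 5 and current_idx > 0:
        return current_idx - 1   # well above 60 fps: drop clocks to save power
    return current_idx           # on target: stay at the cheapest sufficient point

# Example: a short run of measured frame rates and the governor's responses.
idx = 2
for fps in [52, 55, 58, 61, 70, 66, 59]:
    idx = pick_operating_point(fps, idx)
    freq, volt = OPERATING_POINTS[idx]
    print(f"{fps:>3} fps -> {freq} MHz @ {volt:.2f} V, ~{dynamic_power(freq, volt):.2f} W")
```

As discussed in the article and the comments above, GPU Turbo replaces a simple feedback rule like this with a per-game, NPU-assisted model, but the power saving comes from the same place: spending as much time as possible at the cheapest operating point that still meets the frame-rate target.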
