The Minor Issue of Overzealous Marketing

As mentioned earlier in the piece, the most common figures from Huawei and Honor about the new technology follow the same pattern: GPU Turbo is said to offer up to 60% extra performance and 30% better power consumption. Since launch, out of all the marketing materials we have seen, there is exactly one instance where either company expands on these figures: the footnotes of the Honor Play's English global product page, which explain the context of the 60%/30% numbers:

Honor Play's Product Website GPU Turbo Explanation

Here is what that tiny bullet point says:

*2 The GPU Turbo is a graphics processing technology that is based on Kirin chips and incorporates mutualistic software and hardware interaction. And it supports some particular games. 
Results are based on comparison with the previous generation chip, the Kirin 960.

This is a big red flag. Normally, when quantifying a new technology, the performance difference should be quoted between an off and an on state. It should not be hard to see why using the Kirin 960 as the baseline is a massive issue: the marketing materials are conflating their claims, as values explicitly attributed to GPU Turbo, a software technology, are mixed in with silicon improvements between two generations of chipsets.

The honest comparison would be the Kirin 970 with GPU Turbo off against the Kirin 970 with GPU Turbo on. Instead, the baseline is a Kirin 960 without GPU Turbo, compared against the newer Kirin 970 with GPU Turbo enabled.
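To make the arithmetic of the baseline choice concrete, here is a minimal sketch. All FPS values are hypothetical, chosen only to illustrate how the same device can yield wildly different headline numbers depending on what it is compared against:

```python
# A minimal sketch of the baseline problem. All FPS values here are
# hypothetical, chosen only to illustrate the arithmetic.
def improvement(new: float, base: float) -> float:
    """Percentage improvement of `new` over `base`."""
    return (new / base - 1) * 100

kirin_960_fps       = 10.0  # hypothetical: old chip, no GPU Turbo
kirin_970_fps       = 15.0  # hypothetical: new chip, GPU Turbo off
kirin_970_turbo_fps = 16.0  # hypothetical: new chip, GPU Turbo on

# What the marketing quotes: new silicon + GPU Turbo vs. the old chip.
print(f"{improvement(kirin_970_turbo_fps, kirin_960_fps):.1f}%")  # 60.0%

# What a user actually experiences: GPU Turbo on vs. off on the same chip.
print(f"{improvement(kirin_970_turbo_fps, kirin_970_fps):.1f}%")  # 6.7%
```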

For readers unfamiliar with the generational improvements of the Kirin 970 chipset, I recommend referring back to our in-depth review of the chipset, published back in January. In terms of advancements, the Kirin 970 brings a new Mali-G72MP12 GPU running at 746MHz, manufactured on TSMC's new 10nm process. This represented quite an improvement over the 16nm-manufactured Kirin 960, which featured a Mali-G71MP8 at up to 1037MHz.

| Kirin 970 | AnandTech | Kirin 960 |
| :--- | :---: | ---: |
| TSMC 10FF | Mfg. Process | TSMC 16FFC |
| 4xA73 @ 2.36 GHz + 4xA53 @ 1.84 GHz | CPU | 4xA73 @ 2.36 GHz + 4xA53 @ 1.84 GHz |
| Mali-G72MP12 @ 746 MHz | GPU | Mali-G71MP8 @ 1037 MHz |
| Yes | NPU | No |
| Cat 18/13 | Modem | Cat 12/13 |
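A rough sanity check on those GPU specifications: if we assume broadly similar per-core throughput between the two closely related Bifrost designs (an assumption for illustration, not a measured figure), the wider but lower-clocked G72MP12 only offers on the order of 8% more theoretical shader throughput. The real generational gains came from the denser 10nm process and from running lower on the voltage/frequency curve:

```python
# Ballpark theoretical shader throughput: core count x clock speed.
# Assumes similar per-core capability between Mali-G71 and Mali-G72,
# which are closely related designs; treat this as an estimate only.
g71_mp8  = 8  * 1037e6  # Kirin 960: 8 GPU cores at 1037 MHz
g72_mp12 = 12 * 746e6   # Kirin 970: 12 GPU cores at 746 MHz
print(f"{g72_mp12 / g71_mp8 - 1:+.0%}")  # +8%: wider, but clocked lower
```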

Furthermore, the Kirin 960's GPU performance and efficiency were extremely problematic, showcasing some of the worst behavior we've ever seen in a released smartphone. We're not going to rehash why this happened, but it was a competitive blow to the Kirin 960.

Now, the Kirin 970 improved on these low figures, as we've shown in our reviews. But the 60% performance and 30% power improvements quoted for GPU Turbo, while they might sound impressive in isolation, are far less so once we know what they are based on. Being relative to the badly performing Kirin 960 completely changes their meaning: users who enable GPU Turbo on their devices will not experience a 60%/30% difference in performance.

This also flies in the face of Huawei's own data presented throughout the lifetime of GPU Turbo. Quoting 60%/30% makes for impressive headlines (regardless of how honest they are), yet even Huawei's own analysis shows that those figures are wildly optimistic:

Ultimately, Huawei presented the 60%/30% figures as a differential between GPU Turbo on and off. Anyone expecting that difference on their device would be sorely disappointed. That the companies obfuscated the crucial comparison point of the Kirin 960 is almost unreal in that respect.

We also have to criticize the bar charts in the image above quite heavily, as they misrepresent the gains: the 3 FPS gain in PUBG is drawn as a 25% taller bar. Companies misrepresent the true growth in values like this because it makes for a more impressive graph than adhering to the standard practice of starting the axis at zero.
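For those curious how a 3 FPS uplift can be painted as a 25% taller bar, the sketch below reproduces the effect. The FPS values are hypothetical stand-ins, since the chart's exact numbers aren't stated; only the mechanism matters:

```python
# How a truncated y-axis exaggerates a bar chart. FPS values are
# hypothetical stand-ins; only the mechanism is the point.
before, after = 37.0, 40.0      # a 3 fps gain
real_gain = after / before - 1  # ~8.1% actual improvement

axis_start = 25.0  # bars drawn from 25 fps instead of from zero
visual_gain = (after - axis_start) / (before - axis_start) - 1
print(f"real: {real_gain:.1%}, shown: {visual_gain:.1%}")
# real: 8.1%, shown: 25.0%
```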

Why Using The Kirin 960 Is An Issue: Starting With A Low Bar

Going back to our GPU power efficiency figures, measured in GFXBench Manhattan 3.1 and T-Rex, we can put the two chipsets back into context; a quick recalculation of the generational gains follows the tables:

GFXBench Manhattan 3.1 Offscreen Power Efficiency (System Active Power)

| AnandTech | Mfc. Process | FPS | Avg. Power (W) | Perf/W Efficiency |
| :--- | :--- | ---: | ---: | ---: |
| Galaxy S9+ (Snapdragon 845) | 10LPP | 61.16 | 5.01 | 11.99 fps/W |
| Galaxy S9 (Exynos 9810) | 10LPP | 46.04 | 4.08 | 11.28 fps/W |
| Galaxy S8 (Snapdragon 835) | 10LPE | 38.90 | 3.79 | 10.26 fps/W |
| LeEco Le Pro3 (Snapdragon 821) | 14LPP | 33.04 | 4.18 | 7.90 fps/W |
| Galaxy S7 (Snapdragon 820) | 14LPP | 30.98 | 3.98 | 7.78 fps/W |
| Huawei Mate 10 (Kirin 970) | 10FF | 37.66 | 6.33 | 5.94 fps/W |
| Galaxy S8 (Exynos 8895) | 10LPE | 42.49 | 7.35 | 5.78 fps/W |
| Galaxy S7 (Exynos 8890) | 14LPP | 29.41 | 5.95 | 4.94 fps/W |
| Meizu PRO 5 (Exynos 7420) | 14LPE | 14.45 | 3.47 | 4.16 fps/W |
| Nexus 6P (Snapdragon 810 v2.1) | 20Soc | 21.94 | 5.44 | 4.03 fps/W |
| Huawei Mate 8 (Kirin 950) | 16FF+ | 10.37 | 2.75 | 3.77 fps/W |
| Huawei Mate 9 (Kirin 960) | 16FFC | 32.49 | 8.63 | 3.77 fps/W |
| Huawei P9 (Kirin 955) | 16FF+ | 10.59 | 2.98 | 3.55 fps/W |
GFXBench T-Rex Offscreen Power Efficiency (System Active Power)

| AnandTech | Mfc. Process | FPS | Avg. Power (W) | Perf/W Efficiency |
| :--- | :--- | ---: | ---: | ---: |
| Galaxy S9+ (Snapdragon 845) | 10LPP | 150.40 | 4.42 | 34.00 fps/W |
| Galaxy S9 (Exynos 9810) | 10LPP | 141.91 | 4.34 | 32.67 fps/W |
| Galaxy S8 (Snapdragon 835) | 10LPE | 108.20 | 3.45 | 31.31 fps/W |
| LeEco Le Pro3 (Snapdragon 821) | 14LPP | 94.97 | 3.91 | 24.26 fps/W |
| Galaxy S7 (Snapdragon 820) | 14LPP | 90.59 | 4.18 | 21.67 fps/W |
| Galaxy S8 (Exynos 8895) | 10LPE | 121.00 | 5.86 | 20.65 fps/W |
| Galaxy S7 (Exynos 8890) | 14LPP | 87.00 | 4.70 | 18.51 fps/W |
| Huawei Mate 10 (Kirin 970) | 10FF | 127.25 | 7.93 | 16.04 fps/W |
| Meizu PRO 5 (Exynos 7420) | 14LPE | 55.67 | 3.83 | 14.54 fps/W |
| Nexus 6P (Snapdragon 810 v2.1) | 20Soc | 58.97 | 4.70 | 12.54 fps/W |
| Huawei Mate 8 (Kirin 950) | 16FF+ | 41.69 | 3.58 | 11.64 fps/W |
| Huawei P9 (Kirin 955) | 16FF+ | 40.42 | 3.68 | 10.98 fps/W |
| Huawei Mate 9 (Kirin 960) | 16FFC | 99.16 | 9.51 | 10.42 fps/W |
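As promised, a quick recalculation. The efficiency column is simply FPS divided by average power, and recomputing the Kirin 970's gain over the Kirin 960 from the Mate 9 and Mate 10 rows above shows how much improvement the new silicon delivers before any software is involved; note how close the efficiency numbers land to the marketing's headline range:

```python
# Recomputing the generational gain from the table rows above.
# (FPS, average power in watts) for the Mate 9 and Mate 10.
tests = {
    "Manhattan 3.1": {"Kirin 960": (32.49, 8.63), "Kirin 970": (37.66, 6.33)},
    "T-Rex":         {"Kirin 960": (99.16, 9.51), "Kirin 970": (127.25, 7.93)},
}

for test, socs in tests.items():
    (fps_old, w_old), (fps_new, w_new) = socs["Kirin 960"], socs["Kirin 970"]
    perf_gain = fps_new / fps_old - 1
    eff_gain = (fps_new / w_new) / (fps_old / w_old) - 1
    print(f"{test}: performance +{perf_gain:.0%}, efficiency +{eff_gain:.0%}")

# Manhattan 3.1: performance +16%, efficiency +58%
# T-Rex: performance +28%, efficiency +54%
```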

So while the Kirin 970 is a genuine advancement over the Kirin 960, in the context of the competition it still has trouble keeping up with the current generation of Exynos and Snapdragon chipsets.

The key point I'm trying to make in the context of the GPU Turbo claims is that the 60%/30% figures are very much unrealistic and extremely misleading to users. If Huawei and Honor are not clear about their baseline comparisons, the companies' own numbers cannot be trusted in future announcements.

How Much Does GPU Turbo Actually Provide?

On Friday, Huawei CEO Richard Yu announced the new Kirin 980, and some of the presentation slides addressed the mechanism, showcasing a more concrete figure for the effect of GPU Turbo on the newly announced chipset:

Here, the actual performance improvement is rather minor because the workload is V-sync capped and the GPU has no trouble reaching the cap; the power improvement, however, should still be representative. That figure was 10%, a far more reasonable and believable improvement to attribute to software.
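As a sketch of what an honest on/off comparison looks like under a V-sync cap, consider the following. The power figures are hypothetical, picked only to match the ~10% saving from the slide:

```python
# Under a V-sync cap both runs render the same 60 fps, so the honest
# on/off difference shows up purely in power. Watts are hypothetical.
fps_cap = 60.0
power_off, power_on = 5.0, 4.5  # GPU Turbo off vs. on

saving = 1 - power_on / power_off
eff_gain = (fps_cap / power_on) / (fps_cap / power_off) - 1
print(f"power saving: {saving:.0%}, efficiency gain: {eff_gain:.0%}")
# power saving: 10%, efficiency gain: 11%
```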

Comments

  • eastcoast_pete - Tuesday, September 4, 2018 - link

    Thanks Andrei! I agree that this is, in principle, an interesting way to adjust power use and GPU performance in a finer-grained way than otherwise implemented. IMO, it also seems to be an attempt to push HiSilicon's AI core, as its other benefits are a bit more hidden for now (for lack of a better word). Today's power modes (at least on Android) are a bit all-high or all-low, so anything finer grained is welcome. Question: how long can the "turbo" turbo for before it gets a bit warm for the SoC? Did Huawei say anything about thermal limitations? I assume the AI is adjusting according to outside temperature and SoC to outside temperature differential?

    Regardless of AI-supported or not, I frequently wish I could more finely adjust power profiles for CPU, GPU and memory and make choices for my phone myself, along the lines of: 1. Strong, short CPU and GPU bursts enabled, otherwise balanced, to account for thermals and battery use (most everyday use, no gaming), 2. No burst, energy saver all round (need to watch my battery use) and 3. High power mode limited only by thermals (gaming mode), but allows to vary power allocations to CPU and GPU cores. An intelligent management and power allocation would be great for all these, but especially 3.
  • Ian Cutress - Tuesday, September 4, 2018 - link

    GPU Turbo also has a CPU mode, if there isn't an NPU present. That's enabling Huawei to roll it out to older devices. The NPU does make it more efficient though.

    In your mode 3, battery life is still a concern. Pushing the power causes the efficiency to decrease as the hardware is pushed to the edge of its capabilities. The question is how much of a trade off is valid? Thermals can also ramp a lot too - you'll hit thermal skin temp limits a lot earlier than you think. That also comes down to efficiency and design.
  • kb9fcc - Tuesday, September 4, 2018 - link

    Sounds reminiscent of the days when nVidia and ATI would cook some code into their drivers that could detect when certain games and/or benchmarking tools were being run and tweak the performance to return results that favored their GPU.
  • mode_13h - Tuesday, September 4, 2018 - link

    Who's to say Nvidia isn't already doing a variation of GPU Turbo, in their game-ready drivers? The upside is less, with a desktop GPU, but perhaps they could do things like preemptively spike the core clock speed and dip the memory clock, if they knew the next few frames would be shader-limited but with memory bandwidth to spare.
  • Kvaern1 - Tuesday, September 4, 2018 - link

    I don't suppose China has a law that punishes partyboss-owned corporations for making wild dishonest claims.
  • darckhart - Tuesday, September 4, 2018 - link

    ehhh it's getting hype now, but I bet it will only be supported on a few games/apps. it's a bit like nvidia's game ready drivers: sure the newest big name game releases get support (but only for newer gpu) and then what happens when the game updates/patches? will the team keep the game in the library and let the AI keep testing so as to keep it optimized? how many games will be added to the library? how often? which SoC will continue to be supported?
  • mode_13h - Tuesday, September 4, 2018 - link

    Of course, if they just operated a cloud service that automatically trained models based on automatically-uploaded performance data, then it could easily scale to most apps on most phones.
  • Ratman6161 - Tuesday, September 4, 2018 - link

    meh....only for games? So what. Yes, I know a lot of people reading this article care about games, but for those of us who don't this is meaningless. But looking at it as a gamer might, it still seems pretty worthless. Per soc and per game? That's going to take constant updates to keep up with the latest releases. And how long can they keep that up? Personally if I were that interested in games, I'd just buy something that's better at gaming to begin with.
  • mode_13h - Tuesday, September 4, 2018 - link

    See my point above.

    Beyond that, the benefits of a scheme like this, even on "something that's better at gaming to begin with", are longer battery life and less heat. Didn't you see the part where it clocks everything just high enough to hit 60 fps? That's as fast as most phones' displays will update, so any more and you're wasting power.
  • mode_13h - Tuesday, September 4, 2018 - link

    I would add that the biggest benefit is to be had by games, since they use the GPU more heavily than most other apps. They also have an upper limit on how fast they need to run.

    However, a variation on this could be used to manage the speeds of different CPU cores and the distribution of tasks between them.
