Kirin 980 Second Generation NPU - NNAPI Tested

We first tested Huawei’s first-generation NPU back in January in our Kirin 970 review – back then, we were quite limited in the benchmarks we could run, and I mostly relied on Master Lu’s AI test. That test is still around, and we also used it when evaluating Apple’s new A12 neural engine. Unfortunately for the Mate 20 phones, the benchmark isn’t compatible yet, as it seemingly doesn’t use HiSilicon’s HiAI API on these devices and falls back to a CPU implementation for processing.

Google finalised the NNAPI in Android 8.1, and as these things usually go, an API first needs to exist before applications can make use of exotic new features such as dedicated neural inferencing engines.

“AI-Benchmark” is a new tool developed by Andrey Ignatov from the Computer Vision Lab at ETH Zürich in Switzerland. The new benchmark application is, as far as I’m aware, one of the first to make extensive use of Android’s new NNAPI, rather than relying on each SoC vendor’s own SDK tools and APIs. This is an important distinction from AIMark, as AI-Benchmark should more accurately represent the NN performance an application can expect when using the NNAPI.

Andrey extensively documents the workloads, including the NN models used and their functions, and has also published a paper on his methods and findings.

One thing to keep in mind is that the NNAPI isn’t some universal translation layer that can magically run any neural network model on an NPU: both the API and the SoC vendor’s underlying driver must support the exposed functions and be able to run them on the IP block. The distinction here lies between models which use features not yet supported by the NNAPI, and thus have to fall back to a CPU implementation, and models which can be hardware accelerated and operate on quantised INT8 or FP16 data. There are also models relying on FP32 data, and here again, depending on the underlying driver, these can run either on the CPU or, for example, on the GPU.
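As an aside, the “quantised INT8” models mentioned above typically use an affine mapping between real values and 8-bit integers. A minimal Python sketch of that scheme (illustrative only – the scale and zero-point values below are hypothetical, and real drivers perform this in hardware):

```python
def quantize(x, scale, zero_point):
    # Map a real value to an unsigned 8-bit integer: q = round(x / scale) + zp
    q = round(x / scale) + zero_point
    return max(0, min(255, q))  # clamp to the uint8 range

def dequantize(q, scale, zero_point):
    # Recover an approximation of the original real value
    return (q - zero_point) * scale

# Example: represent values in [-1, 1] with scale = 2/255, zero_point = 128
scale, zp = 2.0 / 255.0, 128
q = quantize(0.5, scale, zp)        # -> 192
x = dequantize(q, scale, zp)        # ~0.502, within one quantisation step of 0.5
```

The appeal for NPUs is that the heavy arithmetic then happens on 8-bit integers, at a small accuracy cost bounded by the quantisation step.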

For the time being, I’m refraining from using the app’s overall scores and will simply rely on individual comparisons of each test’s inference time. Another presentational difference is that we’ll go through the test results grouped by the targeted model acceleration type.

[Charts: AIBenchmark 1a - The Life (CPU); 6 - Ms.Universe (CPU); 7 - Berlin Driving (CPU)]

The first three CPU tests rely on models with functions that are not yet supported by the NNAPI. Here what matters for performance is the raw CPU performance as well as the performance ramp-up response time. I mention the latter because the workload is transactional in nature and we are only testing a single image inference. This means that mechanisms such as DVFS and scheduler responsiveness can have a huge impact on the results. This is best demonstrated by the fact that my custom kernel for the Exynos 9810 in the Galaxy S9 performs significantly better than the stock kernel of the same chip in the Note9 in the results above.
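To illustrate the DVFS point with a toy model (all figures hypothetical, not measurements from any of these SoCs): if the governor holds the CPU at a low idle clock for some ramp delay before reaching the maximum frequency, that delay can dominate a single short inference.

```python
def single_inference_ms(work_mcycles, idle_mhz, max_mhz, ramp_ms):
    # Toy model: the CPU runs at idle_mhz for ramp_ms (the governor's
    # ramp-up delay), then at max_mhz until the work is done.
    ramp_capacity = idle_mhz * ramp_ms / 1000.0  # Mcycles retired during ramp
    if work_mcycles <= ramp_capacity:
        return work_mcycles / idle_mhz * 1000.0
    remaining = work_mcycles - ramp_capacity
    return ramp_ms + remaining / max_mhz * 1000.0

# 200 Mcycles of work: a 40 ms governor ramp adds 40% to the latency
slow_governor = single_inference_ms(200, idle_mhz=500, max_mhz=2500, ramp_ms=40)  # 112 ms
instant_ramp  = single_inference_ms(200, idle_mhz=500, max_mhz=2500, ramp_ms=0)   # 80 ms
```

For a sustained workload the ramp would amortise away; for a one-shot inference it does not, which is why scheduler and DVFS tuning shows up so strongly in these particular results.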

Still, comparing the Huawei P20 Pro (the most up-to-date software stack with the Kirin 970) to the new Mate 20, we see some really impressive results from the latter. This showcases both the performance of the A76 cores and possible improvements in HiSilicon’s DVFS/scheduler.

[Charts: AIBenchmark 1c - The Life (INT8); 3 - Pioneers (INT8); 5 - Cartoons (INT8)]

Moving on to the next set of tests, these are based on 8-bit integer quantised NN models. Unfortunately for the Huawei phones, HiSilicon’s NNAPI drivers still don’t seem to expose hardware acceleration for these. Andrey shared with me that, in his communications with Huawei, they plan to rectify this in a future version of the driver.

Effectively, these tests also don’t use the NPU on the Kirins, and it’s again a showcase of the CPU performance.

On the Qualcomm devices, we see the OnePlus 6 and Pixel 3 far ahead in performance, even compared to the Galaxy S9+ with the same chipset. The reason is that both of these phones run a newer NNAPI driver from Qualcomm, which came along with the Android 9/P BSP update. Here acceleration is facilitated through the HVX DSPs.

[Charts: AIBenchmark 1b - The Life (FP16); 2 - Zoo (FP16); 4 - Masterpiece (FP16)]

Moving on to the FP16 tests, here we finally see the Huawei devices make use of the NPU, and post some leading scores both on the old and new generation SoCs. Here the Kirin 980’s >2x NPU improvement finally materialises, with the Mate 20 showcasing a big lead.
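As a refresher on why FP16 is the sweet spot for NPUs: half precision keeps an 11-bit significand, halving bandwidth and storage versus FP32 at a precision loss most NN inference tolerates well. A toy Python sketch of the rounding this implies (normal range only; no subnormal, overflow, or infinity handling):

```python
import math

def round_to_fp16(x):
    # Round a float to the nearest value representable with an 11-bit
    # significand, as in IEEE-754 half precision (toy model: normal
    # range only, no subnormal/overflow handling).
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)            # x = m * 2**e with 0.5 <= |m| < 1
    return math.ldexp(round(m * 2**11), e - 11)

# Above 2048, consecutive integers are no longer distinguishable in FP16
assert round_to_fp16(2049.0) == 2048.0
```

Network weights and activations rarely need more precision than this, which is why FP16 (and INT8) paths are what the NPU hardware accelerates.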

I’m not sure if the other devices are running the workloads on the CPU or on the GPU, and the OnePlus 6 seems to suffer from some very odd regression in its NNAPI drivers that makes it perform an order of magnitude worse than other platforms.

[Chart: AIBenchmark 8 - Berlin Driving (FP32)]

Finally, in the last FP32 model test, most phones should again be running the workload on the CPU. The Mate 20 shows a more limited improvement here.

Overall, AI-Benchmark was at least able to validate some of Huawei’s NPU performance claims, even though the real conclusion we should draw from these results is that most devices’ NNAPI drivers are currently immature and still very limited in their functionality – a sad contrast to where Apple’s CoreML ecosystem is today.

I refer back to my conclusion from earlier in the year regarding the Kirin 970: I still don’t see the NPU as something obviously beneficial to users, simply because we just don’t have the software applications available to make use of the hardware. I’m not sure to what extent Huawei uses the NPU for camera processing, but beyond such first-party use-cases, NPUs currently still seem mostly inconsequential to the device experience.


141 Comments


  • FunBunny2 - Sunday, November 18, 2018 - link

    "the phone needs to dissipate less heat overall."

    Not necessarily. Only if the following time period of lowered power draw is sufficient to dissipate that heat as well as the 'heat debt' from the previous spike. The laws of thermodynamics can't be changed just because one wishes them to.
  • melgross - Friday, November 16, 2018 - link

    I don’t know how the 980 is outstanding when it does edge past Android SoCs, most of the time, but it’s a really lousy performer compared to the A12. Again, Android devices, and even parts, are being rated on a curve. If you give the A12 a grade of 100 on each rating, then the 980 is no more than a 70, and often a 50, or even a 40. That’s not outstanding, even if it’s much better than the really bad 970 from last year.
  • tuxRoller - Saturday, November 17, 2018 - link

    In SPEC, the 980 has the best efficiency of all SoCs.
    Your statement would hold if we were only concerned with the greatest performance.
  • zanon - Monday, November 19, 2018 - link

    What? Doesn't look like that. The SPEC graphs show total energy consumption in J on the left and performance on the right. To get efficiency you need to divide the two right? It's not just absolute energy it's how much energy it takes for each unit of performance. In those tests it's showing the A12 takes 212 J/perf in the first and 107 in the second. The 980 is 368 and 157 respectively. Watts is energy over time, if one SoC can finish a given task faster then the total energy is less even if the peak is more. On a desktop or even tablet there may be cases of more sustained performance (although a high burst chip could just down clock or simply flat out offer better performance and just suggest plugging in), but phone workloads tend to be pretty bursty. Race-to-sleep isn't a bad strategy.
  • Wilco1 - Monday, November 19, 2018 - link

    The graph is very clear - 980 beats all other SoCs on efficiency. The energy bar is the total energy in Joules, so power in Watts (J/s) multiplied by time to finish (s), giving total Joules.
  • tuxRoller - Tuesday, November 20, 2018 - link

    Int: 9480J
    Fp: 5337J
  • s.yu - Friday, November 16, 2018 - link

    I don't really agree about using performance mode for benchmarks, unless battery tests were also run on performance mode.
    Obviously if you use performance mode your device will be more snappy, at the cost of battery life. But since they're not governed under the same mode, the battery and performance benefits are mutually exclusive – you can't have both. You literally can't have the snappy experience of performance mode for the length of time measured by a non-performance-mode battery test, therefore testing it this way is not representative of real world experience.
  • Andrei Frumusanu - Friday, November 16, 2018 - link

    Everything, including the battery tests, were in performance mode. Huawei pretty much recommended it to run it like this. It's actually more of an issue that it's not enabled out of the box, and many reviewers actually fell for this new behaviour.
  • s.yu - Friday, November 16, 2018 - link

    Oh! In that case it's not a problem. I saw another site testing everything in non-performance mode and some people were complaining..
  • s.yu - Friday, November 16, 2018 - link

    ...but I'm still curious if changing the app signature would make a difference.
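As a footnote to the efficiency discussion in the comments above: the energy figures in question are simply average power integrated over run time, which is why a chip drawing more watts can still consume fewer joules if it finishes sooner ("race to sleep"). A sketch with purely hypothetical numbers:

```python
def energy_joules(avg_watts, seconds):
    # Total energy is average power multiplied by time to completion.
    return avg_watts * seconds

# Hypothetical workload: a fast chip at 4 W finishing in 30 s uses less
# total energy than a slower chip at 2.5 W that needs 60 s for the same task.
fast = energy_joules(4.0, 30.0)   # 120 J
slow = energy_joules(2.5, 60.0)   # 150 J
assert fast < slow
```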
