Kirin 980 Second Generation NPU - NNAPI Tested

We tested the first generation Kirin NPU back in January in our Kirin 970 review. Back then we were quite limited in the benchmarking tests we were able to run, and I mostly relied on Master Lu’s AI test. That test is still around, and we’ve also used it in our performance testing of Apple’s new A12 neural engine. Unfortunately for the Mate 20s, the benchmark isn’t compatible yet, as it seemingly doesn’t use HiSilicon’s HiAI API on the phones and falls back to a CPU implementation for processing.

Google finalised the NNAPI back in Android 8.1, and as these things usually go, we first need an API to come out before we can see applications make use of exotic new features such as dedicated neural inferencing engines.

“AI-Benchmark” is a new tool developed by Andrey Ignatov from the Computer Vision Lab at ETH Zürich in Switzerland. The new benchmark application is, as far as I’m aware, one of the first to make extensive use of Android’s new NNAPI, rather than relying on each SoC vendor’s own SDK tools and APIs. This is an important distinction from AIMark, as AI-Benchmark should be better able to accurately represent the NN performance an application using the NNAPI can expect.

Andrey extensively documents the workloads, such as the NN models used as well as their functions, and has also published a paper on his methods and findings.

One thing to keep in mind is that the NNAPI isn’t some universal translation layer that can magically run a neural network model on an NPU; rather, the API as well as the SoC vendor’s underlying driver must support the exposed functions and be able to run them on the IP block. The distinction here lies between models which use features that are to date not yet supported by the NNAPI, and thus have to fall back to a CPU implementation, and models which can be hardware accelerated and operate on quantized INT8 or FP16 data. There are also models relying on FP32 data, and here again, depending on the underlying driver, these can be run either on the CPU or, for example, on the GPU.
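To illustrate how an application typically ends up on this path, here is a minimal sketch of running one inference through TensorFlow Lite’s NNAPI delegate on Android. This is my own hedged example rather than AI-Benchmark’s actual code; the class name, model buffer, input shape and 1000-class output are all hypothetical. The point is simply that the delegate hands whatever operations the vendor’s NNAPI driver supports over to that driver, while anything unsupported falls back to TFLite’s own CPU kernels.

```java
import java.nio.MappedByteBuffer;

import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.nnapi.NnApiDelegate;

public class NnapiClassifier {
    // "modelBuffer" is a hypothetical .tflite model already mapped from the app's assets.
    public static float[][] classify(MappedByteBuffer modelBuffer, float[][][][] image) {
        NnApiDelegate nnApiDelegate = new NnApiDelegate();
        Interpreter.Options options = new Interpreter.Options()
                // Route supported ops through the NNAPI; the vendor driver decides
                // whether they land on an NPU, DSP, GPU or its own CPU path.
                .addDelegate(nnApiDelegate);

        Interpreter interpreter = new Interpreter(modelBuffer, options);

        // Hypothetical 1000-class output, as with ImageNet-style classification models.
        float[][] output = new float[1][1000];
        interpreter.run(image, output);

        interpreter.close();
        nnApiDelegate.close();
        return output;
    }
}
```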

For the time being I’m refraining from using the app’s overall scores and will simply rely on individual comparisons of each test’s inference time. Another presentational difference is that we’ll go through the test results grouped by the targeted model acceleration type.

AIBenchmark - 1a - The Life - CPU
AIBenchmark - 6 - Ms.Universe - CPU
AIBenchmark - 7 - Berlin Driving - CPU

The first three CPU tests rely on models which have functions that are not yet supported by the NNAPI. Here what matters for performance is simply the CPU performance, as well as the performance response time. I mention the latter because the workload is transactional in nature and we are testing just a single image inference. This means that mechanisms such as DVFS and scheduler responsiveness can have a huge impact on the results. This is best demonstrated by the fact that my custom kernel for the Exynos 9810 in the Galaxy S9 performs significantly better than the stock kernel of the same chip in the Note9 in the results above.

Still, comparing the Huawei P20 Pro (the most up-to-date software stack with the Kirin 970) to the new Mate 20, we see some really impressive results from the latter. This showcases both the performance of the A76 cores, as well as possible improvements in HiSilicon’s DVFS/scheduler.
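To make the transactional, single-shot nature of these measurements concrete, here is a hedged sketch of how one might time a cold inference against a warmed-up average, reusing the hypothetical interpreter and buffers from the earlier sketch. The first run lands while the governor may still be ramping up frequencies, which is exactly where DVFS and scheduler responsiveness show up in the results.

```java
import org.tensorflow.lite.Interpreter;

public class InferenceTiming {
    // Times one "cold" inference followed by a warmed-up average, using the
    // hypothetical interpreter/input/output objects from the previous sketch.
    public static void measure(Interpreter interpreter, float[][][][] image, float[][] output) {
        // Cold run: the CPU governor may still be at low frequencies here, so the
        // measured latency includes the DVFS ramp-up and scheduler migration time.
        long coldStart = System.nanoTime();
        interpreter.run(image, output);
        long coldMs = (System.nanoTime() - coldStart) / 1_000_000;

        // Warmed-up runs: after a few back-to-back inferences the cores sit at their
        // highest frequencies, so this average reflects steady-state performance.
        final int runs = 10;
        long warmTotal = 0;
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            interpreter.run(image, output);
            warmTotal += System.nanoTime() - start;
        }
        long warmMs = warmTotal / runs / 1_000_000;

        System.out.println("cold: " + coldMs + " ms, warm avg: " + warmMs + " ms");
    }
}
```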

AIBenchmark - 1c - The Life - INT8
AIBenchmark - 3 - Pioneers - INT8
AIBenchmark - 5 - Cartoons - INT8

Moving on to the next set of tests, these are based on 8-bit integer quantized NN models. Unfortunately for the Huawei phones, HiSilicon’s NNAPI driver still doesn’t seem to expose acceleration for these to the hardware. Andrey shared with me that in his communications with Huawei, the company said it plans to rectify this in a future version of the driver.

Effectively, these tests also don’t use the NPU on the Kirins, and are again a showcase of CPU performance.
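As a quick refresher on what the quantized INT8 path involves: TensorFlow Lite’s quantization scheme represents tensors as 8-bit integers together with a per-tensor scale and zero point, so that real ≈ scale * (q - zero_point), letting the hardware run narrow integer arithmetic instead of floating point. A small illustrative sketch with made-up parameter values:

```java
public class QuantizationExample {
    // Affine INT8 quantization as used by quantized TFLite models:
    //   real ≈ scale * (quantized - zeroPoint)
    // The scale and zero point below are made-up values for illustration only.
    static final float SCALE = 0.0235f;
    static final int ZERO_POINT = 128;

    static int quantize(float real) {
        int q = Math.round(real / SCALE) + ZERO_POINT;
        return Math.max(0, Math.min(255, q)); // clamp to the UINT8 range
    }

    static float dequantize(int q) {
        return SCALE * (q - ZERO_POINT);
    }

    public static void main(String[] args) {
        float activation = 1.5f;
        int q = quantize(activation);
        System.out.println(activation + " -> " + q + " -> " + dequantize(q));
    }
}
```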

On the Qualcomm devices, we see the OnePlus 6 and Pixel 3 far ahead in performance, even compared to the Galaxy S9+ with the same chipset. The reason for this is that both of these phones are running a new, updated NNAPI driver from Qualcomm which came along with the Android 9/P BSP update. Here acceleration is facilitated through the HVX DSPs.

AIBenchmark - 1b - The Life - FP16
AIBenchmark - 2 - Zoo - FP16
AIBenchmark - 4 - Masterpiece - FP16

Moving on to the FP16 tests, here we finally see the Huawei devices make use of the NPU and post leading scores on both the old and new generation SoCs. Here the Kirin 980’s >2x NPU improvement finally materialises, with the Mate 20 showcasing a big lead.

I’m not sure if the other devices are running the workloads on the CPU or on the GPU, and the OnePlus 6 seems to suffer from some very odd regression in its NNAPI drivers that makes it perform an order of magnitude worse than other platforms.

AIBenchmark - 8 - Berlin Driving - FP32

Finally, on the last FP32 model test, most phones should again be running the workload on the CPU. Here the Mate 20 shows a more limited improvement.

Overall, AI-Benchmark was at least able to validate some of Huawei’s NPU performance claims, although the real conclusion we should draw from these results is that NNAPI drivers on most devices are currently just inherently immature and still very limited in their functionality, which sadly is a stark contrast to where Apple’s CoreML ecosystem is at today.

I refer back to my conclusion from earlier in the year regarding the Kirin 970: I still don’t see the NPU as something that is obviously beneficial to users, simply because we just don’t have the software applications available to make use of the hardware. I’m not sure to what extent Huawei uses the NPU for camera processing, but beyond such first-party use cases, NPUs currently still seem mostly inconsequential to the device experience.

141 Comments

  • Lord of the Bored - Saturday, November 17, 2018 - link

    It just looks like a piece of colored glass to me. I'm not convinced there's enough design to copy in the current pocket computer market.
  • wheelman26 - Friday, November 16, 2018 - link

    "Huawei is the only Android manufacturer that is able to take advantage of full vertical integration of silicon and handsets." - There's also Samsung.

    "Huawei’s first phone to push beyond 1080p" - In 2015 the Huawei-Google Nexus 6P had a 1440x250 display.
  • Andrei Frumusanu - Friday, November 16, 2018 - link

    Samsung isn't vertically integrated, S.LSI has to compete with Qualcomm, and the mobile division doesn't seem to care much about what silicon is inside.

    As for the 6P, fair enough and true, but featuring it wasn't Huawei's decision.
  • Lolimaster - Friday, November 16, 2018 - link

    Maybe because Huawei are only using in-house SOC vs Samsung with their fail Exynos line.
  • s.yu - Friday, November 16, 2018 - link

    lol, Samsung should just license Andrei's scheduler already ;D
  • Quantumz0d - Friday, November 16, 2018 - link

    "Imitation is the best form of flattery" well I still have to read up on the Kirin but this one. It really is bad statement from AT. Its like one doesn't care about how the uniqueness of anything matters its a shame tbh, free pass just like Pixels.

    Great times we live nowadays. Phones costing over $1000 planned obsolescence and lack of ownership (BL unlock) and lack of usefulness is best.
  • Speedfriend - Friday, November 16, 2018 - link

    Am I reading it correctly that the A12 hits 5W+ at times, versus the Kirin in the 1.5-3.0W range? Does that mean that Apple has to dissipate more heat? And in many benchmarks the A12 draws more power than the A11 despite the move to 7nm; is this a trade-off Apple is deciding to make in order to drive performance?
  • Andrei Frumusanu - Friday, November 16, 2018 - link

    Correct, yes, and yes.

    As long as the performance increase is bigger than the power increase, efficiency will still go up. Thermals in this case is just a secondary metric.
  • iwod - Friday, November 16, 2018 - link

    This is bad because the A12 is using more power, and I can't imagine getting any performance improvement next year. I guess there is only so much that can be done?
  • Lew Zealand - Friday, November 16, 2018 - link

    The A12 is using less power. It uses more instantaneous power but uses that higher power for a much shorter time to get the work done, so it uses less power overall for the same task and therefore the phone needs to dissipate less heat overall.
