Kirin 980 Second Generation NPU - NNAPI Tested

We tested the first-generation Kirin NPU back in January in our Kirin 970 review. Back then, we were quite limited in terms of the benchmarks we were able to run, and I mostly relied on Master Lu’s AI test. That benchmark is still around, and we’ve also used it in performance testing Apple’s new A12 neural engine. Unfortunately for the Mate 20s, the benchmark isn’t compatible yet, as it seemingly doesn’t use HiSilicon’s HiAI API on the phones and falls back to a CPU implementation for processing.

Google finalised the NNAPI in Android 8.1, and as these things usually go, we first need an API to come out before we can see applications make use of exotic new features such as dedicated neural inferencing engines.

“AI-Benchmark” is a new tool developed by Andrey Ignatov from the Computer Vision Lab at ETH Zürich in Switzerland. The benchmark application is, as far as I’m aware, one of the first to make extensive use of Android’s new NNAPI, rather than relying on each SoC vendor’s own SDK tools and APIs. This is an important distinction from AIMark, as AI-Benchmark should more accurately represent the NN performance an application would see when using the NNAPI.

Andrey extensively documents the workloads, such as the NN models used as well as their function, and has also published a paper on his methods and findings.

One thing to keep in mind is that the NNAPI isn’t some universal translation layer that can magically run a neural network model on an NPU; rather, both the API and the SoC vendor’s underlying driver must support the exposed functions and be able to run them on the IP block. The distinction here lies between models which use features that are not yet supported by the NNAPI, and thus have to fall back to a CPU implementation, and models which can be hardware accelerated and operate on quantized INT8 or FP16 data. There are also models relying on FP32 data, and here again, depending on the underlying driver, these can run either on the CPU or, for example, on the GPU.
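
As a loose illustration of the distinction described above, here is a minimal Python sketch of how an NNAPI-style runtime might partition a model's operations between the accelerator and a CPU fallback. The operation names and the supported set are purely illustrative, not actual NNAPI driver data:

```python
# Illustrative only: a driver advertises which operations it can accelerate,
# and anything outside that set falls back to a CPU implementation.
SUPPORTED_ON_NPU = {"CONV_2D", "DEPTHWISE_CONV_2D", "RELU"}  # hypothetical driver capability set

def partition(ops):
    """Split a model's operation list into NPU-accelerated and CPU-fallback groups."""
    npu, cpu = [], []
    for op in ops:
        (npu if op in SUPPORTED_ON_NPU else cpu).append(op)
    return npu, cpu

# A model containing an op the driver doesn't support (here "LSTM")
# ends up partially or wholly on the CPU.
npu_ops, cpu_ops = partition(["CONV_2D", "RELU", "LSTM", "SOFTMAX"])
```

In practice the real partitioning happens at the graph level inside the NNAPI runtime and the vendor driver, but the principle is the same: unsupported operations force a CPU path, which is exactly what the first set of tests below exercises.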

For the time being, I’m refraining from using the app’s aggregate scores and will simply rely on individual comparisons of each test’s inference time. Another presentational difference is that we’ll go through the test results grouped by the targeted model acceleration type.

[Charts: AIBenchmark 1a - The Life - CPU / 6 - Ms.Universe - CPU / 7 - Berlin Driving - CPU]

The first three CPU tests rely on models with functions that are not yet supported by the NNAPI. Here what matters for the results is raw CPU performance as well as performance responsiveness. I mention the latter because the workload is transactional in nature: we are testing just a single image inference. This means that mechanisms such as DVFS and scheduler responsiveness can have a huge impact on the results. This is best demonstrated by the fact that my custom kernel for the Exynos 9810 in the Galaxy S9 performs significantly better than the stock kernel of the same chip in the Note9 in the results above.

Still, comparing the Huawei P20 Pro (the most up-to-date software stack with the Kirin 970) to the new Mate 20, we see some really impressive results from the latter. This showcases both the performance of the A76 cores and possible improvements in HiSilicon’s DVFS/scheduler.

[Charts: AIBenchmark 1c - The Life - INT8 / 3 - Pioneers - INT8 / 5 - Cartoons - INT8]

Moving on to the next set of tests, these are based on 8-bit integer quantized NN models. Unfortunately for the Huawei phones, HiSilicon’s NNAPI drivers still don’t seem to expose acceleration for these to the hardware. Andrey shared with me that in his communications with Huawei, the company said it plans to rectify this in a future version of the driver.
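
For context on what an 8-bit quantized model actually entails, below is a minimal sketch of affine INT8 quantization, the general scheme such models use: FP32 values are mapped to integers via a scale and zero-point. The scale and zero-point values here are arbitrary examples, not taken from any of the benchmark’s models:

```python
# Affine quantization: q = round(v / scale) + zero_point, clamped to INT8 range.
def quantize(values, scale, zero_point):
    """Map FP32 values to INT8 integers, clamped to [-128, 127]."""
    return [max(-128, min(127, round(v / scale) + zero_point)) for v in values]

def dequantize(q_values, scale, zero_point):
    """Recover approximate FP32 values from their INT8 representation."""
    return [(q - zero_point) * scale for q in q_values]

weights = [0.5, -1.25, 2.0]                     # example FP32 weights
q = quantize(weights, scale=0.05, zero_point=0)  # -> small integers
approx = dequantize(q, scale=0.05, zero_point=0) # close to the originals
```

The upshot is that an INT8 model trades a small amount of precision for much cheaper arithmetic, which is precisely what dedicated inference hardware is built to exploit, when the driver actually routes the work there.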

Effectively, these tests also don’t use the NPU on the Kirins, and it’s again a showcase of the CPU performance.

On the Qualcomm devices, we see the OnePlus 6 and Pixel 3 far ahead in performance, even compared to the Galaxy S9+ with the same chipset. The reason for this is that both of these phones are running a new updated NNAPI driver from Qualcomm which came along with the Android 9/P BSP update. Here acceleration is facilitated through the HVX DSPs.

[Charts: AIBenchmark 1b - The Life - FP16 / 2 - Zoo - FP16 / 4 - Masterpiece - FP16]

Moving on to the FP16 tests, here we finally see the Huawei devices make use of the NPU, and post some leading scores both on the old and new generation SoCs. Here the Kirin 980’s >2x NPU improvement finally materialises, with the Mate 20 showcasing a big lead.

I’m not sure if the other devices are running the workloads on the CPU or on the GPU, and the OnePlus 6 seems to suffer from some very odd regression in its NNAPI drivers that makes it perform an order of magnitude worse than other platforms.

[Chart: AIBenchmark 8 - Berlin Driving - FP32]

Finally, on the last FP32 model test, most phones should again be running the workload on the CPU. The Mate 20 shows a more limited improvement here.

Overall, AI-Benchmark was at least able to validate some of Huawei’s NPU performance claims, though the real conclusion we should draw from these results is that most devices’ NNAPI drivers are currently immature and still very limited in their functionality, which is a sad contrast to where Apple’s CoreML ecosystem is at today.

I refer back to my conclusion from earlier in the year regarding the Kirin 970: I still don’t see the NPU as something obviously beneficial to users, simply because we just don’t have the software applications available to make use of the hardware. I’m not sure to what extent Huawei uses the NPU for camera processing, but beyond such first-party use-cases, NPUs currently still seem mostly inconsequential to the device experience.

Comments

  • name99 - Friday, November 16, 2018 - link

    Please don't treat me like a child; read my comments and treat me accordingly.
    DDR as a rate (transactions) DOUBLE the clock was only relevant to the transition from SDR to DDR.
    What do you think is the difference between DDR and DDR2, or DDR2 and DDR3, or DDR3 and DDR4?

    Part of the problem seems to be that no-one can agree on what "clock" actually refers to.
    There are at least two clocks of interest - the internal DRAM clock, and the external bus clock.

    As far as I can tell:
    - DDR doubled the transfer rate over the external bus. (External bus, internal clock the same, just like SDR). Internal clock is ~100..200MHz
    - DDR2 runs the external clock at twice the internal clock.
    - DDR3 runs the external clock at 4x the internal clock. (still running from ~100 to 266MHz)

    - At DDR4 I'm no longer sure (which is part of the whole reason for this confusion).
    The obvious assumption is that the external clock is now run at 8x the internal clock; but that does NOT seem to be the case. Rather what's defined as the internal clock is now run twice as fast, so that the internal:external multiplier is still 8x, but the internal clock speeds now range from ~200 to ~400MHz.

    Meanwhile, is LPDDR following the same pattern at each generation? I haven't a clue, and can find no useful answer on the internet.
  • anonomouse - Friday, November 16, 2018 - link

    I think the discussion of internal/external clock ratios is somewhat orthogonal to your originally posed question: the clock that is being advertised is the IO clock for the LPDDR4 modules, since they're telling you what the peak bandwidth of the module is. Commands are on the same clock but SDR instead of DDR and each command takes multiple cycles. Don't quite see what is so confusing about the 2133MHz clock though, since the way they are describing it is entirely accurate and is no different from previous practices. DDR4-3200 has a 1600MHz IO clock too.

    Also worth remembering that while pin speed is higher, individual LPDDR4 channels are 16bits vs 64bits, so it's not like the actual bandwidth is necessarily higher. This phone has 4-channels to get 34.1GB/s, which is the same bandwidth you'd get from a 2-channel DDR4-2133 system, but much more feasible to scale up capacity/channels/clocks on DDR4.
  • frostyfiredude - Saturday, November 17, 2018 - link

    Look, I have no idea where you're going with all the internal clocks and DDR4, DDR3, etc differences so I'm not commenting. But, here are the facts on the Mate 20 Pro:

    The DRAM - Memory controller interface is clocked at 2133Mhz.
    Due to being of the DDR family, 2 bits are transferred per clock.
    Together, this means a 4266Mbit/s transfer rate per pin.
    Finally, it's a 64-bit bus, meaning 64 data pins: 273,024Mbit/s aggregate bandwidth.
    That breaks down to 34.1GB/s.
    In standard DIMM form on your favourite PC parts store, this is advertised as DDR4-4266 or PC4-34100.
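
    The arithmetic above checks out; a quick sketch, using the figures from the comment:

```python
# Reproducing the bandwidth calculation: DDR transfers 2 bits per pin per clock.
clock_mhz = 2133          # DRAM - memory controller interface clock
bits_per_clock = 2        # double data rate
bus_width_bits = 64       # e.g. four 16-bit LPDDR4 channels

per_pin_mbps = clock_mhz * bits_per_clock       # 4266 Mbit/s per pin
aggregate_mbps = per_pin_mbps * bus_width_bits  # 273,024 Mbit/s aggregate
bandwidth_gbs = aggregate_mbps / 8 / 1000       # convert bits -> bytes, M -> G
```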
  • ternnence - Friday, November 16, 2018 - link

    The closer the RAM is to the CPU core, the higher the frequency it can reach. HBM is another example.
  • eastcoast_pete - Friday, November 16, 2018 - link

    @Andrei: thanks for this in-depth review! I wonder how S.LSI takes your pessimistic take on their M4; it seems they have a hard time backing away from their in-house design that doesn't seem to cut it. Also, I appreciate that you're live-updating the review with additional information; I trust reviews that add and update their findings as new data become available much more than the one-and-done style.
    Question: Did you have a chance to ask Huawei along those lines: "What is your commitment to OS updates, how quickly will you make them available, and for how many years?". Having been burned by Huawei a few years ago (promised OS update never arrived), I am still a bit once burned, twice shy. These devices are pricey, and if Huawei wants to take on Apple at Apple prices, they should mirror Apple's commitment to provide OS updates for several years.
  • rayhydro - Friday, November 16, 2018 - link

    I'm using the Mate 20 now, and I can confirm it has the same stereo setup as the Mate 20 Pro. Maybe your unit's top tweeter is faulty?
  • rayhydro - Friday, November 16, 2018 - link

    I tested both side by side in the stores, and both models' stereo speakers sound pretty much the same or extremely similar to my ears. I opted for the Mate 20 due to its smaller notch and headphone jack :D
  • lucam - Friday, November 16, 2018 - link

    I still think Mali GPU is a garbage GPU
  • Lolimaster - Friday, November 16, 2018 - link

    To put it simply: in the same year, they're one year behind.

    Mali G76 MP10 ~ Adreno 540 (a bit faster on the mali side, maybe)
  • lucam - Saturday, November 17, 2018 - link

    Adreno has always been better. Still think Imagination has the best solution, though.
