NPU Performance Tested

To test the performance of the NPU we need a benchmark which currently targets all of the various vendor APIs. Unfortunately, at this stage, short of developing our own implementation, the choices are scarce – but luckily there is one: the popular Chinese benchmark suite Master Lu recently introduced an AI benchmark implementing both HiSilicon’s HiAI and Qualcomm’s SNPE frameworks. The benchmark implements three different neural network models: VGG16, InceptionV3, and ResNet34. The input dataset is 100 images forming a subset of the ImageNet reference database. As a fall-back, the app implements the TensorFlow inferencing library to run on the CPU. I ran the performance figures on the Mate 10 Pro and Mate 9, as well as on two Snapdragon 835 devices (Pixel 2 XL & LG V30), running on the CPU and the Hexagon DSP respectively.
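For readers curious what the benchmark's CPU fall-back path boils down to, the sketch below shows the general shape of such a test: load a stock ImageNet classifier, run it over a set of images one at a time, and derive inferences per second. To be clear, this is not Master Lu's actual code – the model choice, image directory, and preprocessing are illustrative assumptions.

```python
# Minimal sketch of a CPU inference timing loop in the spirit of the
# Master Lu AI test: classify 100 ImageNet images with a stock model
# and report inferences per second. Not the benchmark's actual code;
# the "images/" directory layout is an assumption for illustration.
import glob
import time

import numpy as np
import tensorflow as tf

model = tf.keras.applications.InceptionV3(weights="imagenet")

def load(path):
    # InceptionV3 expects 299x299 inputs scaled to [-1, 1].
    img = tf.keras.preprocessing.image.load_img(path, target_size=(299, 299))
    x = tf.keras.preprocessing.image.img_to_array(img)
    return tf.keras.applications.inception_v3.preprocess_input(x)

paths = sorted(glob.glob("images/*.jpg"))[:100]
batch = np.stack([load(p) for p in paths])

start = time.perf_counter()
for i in range(len(batch)):
    model.predict(batch[i:i + 1], verbose=0)  # one image per inference
elapsed = time.perf_counter() - start
print(f"{len(batch) / elapsed:.2f} inferences/s on the CPU")
```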

Similarly to the SPEC2006 results, I chose to use a more complex graph to better showcase the three dimensions of average power (W), efficiency (mJ/inference), and absolute performance (fps, or inferences per second).
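The three axes are tied together by a simple identity – energy per inference is just average power divided by throughput – which is worth keeping in mind when reading the graph:

```python
# energy per inference (mJ) = average power (W) / throughput (inf/s) * 1000
# The figures below are made-up placeholders, not measured results.
def mj_per_inference(avg_power_w: float, inferences_per_s: float) -> float:
    return avg_power_w / inferences_per_s * 1000.0

print(mj_per_inference(4.0, 2.0))   # e.g. a CPU-style run: 2000 mJ/inference
print(mj_per_inference(1.5, 20.0))  # e.g. an NPU-style run:   75 mJ/inference
```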

The first thing we notice from the graph is an order of magnitude difference in performance between the NPU and CPU implementations. Running the networks as-is on the CPUs, we're not able to exceed 1-2 fps, and we do so at very heavy CPU power consumption. Both the Snapdragon 835 and the Kirin 960 CPUs struggle with the workloads, with average power draw exceeding sustainable levels.

Qualcomm’s Hexagon DSP is able to improve on the CPU performance by a factor of 5-8x. But Huawei’s NPU performance figures are again several factors above that, showcasing up to a 4x lead in ResNet34. The reason the performance ratios differ between models lies in their design. Convolutional layers are heavily parallelisable, whilst the pooling and fully connected layers of the models require more serial processing steps. ResNet in particular spends a larger percentage of a single inference on convolution processing and is thus able to achieve a higher utilisation rate of the Kirin NPU.
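To make the utilisation argument concrete, a rough way to reason about it is to count where a model's multiply-accumulate operations (MACs) live. The sketch below uses simplified stand-in layers rather than the exact VGG16 or ResNet34 topologies:

```python
# Rough illustration of why convolution-heavy models map better onto the
# NPU: count multiply-accumulates (MACs) per layer type. The layers below
# are simplified stand-ins, not the exact VGG16 or ResNet34 topologies.

def conv_macs(h, w, c_in, c_out, k):
    # Each of h*w output positions computes c_out dot products of length
    # k*k*c_in, and every position can be processed in parallel.
    return h * w * c_out * k * k * c_in

def fc_macs(n_in, n_out):
    # One big matrix-vector product per inference; far less exposed parallelism.
    return n_in * n_out

# A VGG16-like model ends in huge fully-connected layers...
vgg_conv = conv_macs(224, 224, 64, 64, 3)  # a single early conv layer
vgg_fc = fc_macs(25088, 4096) + fc_macs(4096, 4096) + fc_macs(4096, 1000)

# ...while a ResNet-style model is almost entirely convolutions.
res_conv = conv_macs(56, 56, 64, 64, 3)    # a single residual-block conv
res_fc = fc_macs(512, 1000)                # tiny classifier head

print(f"VGG-like FC head:    {vgg_fc:>13,} MACs")  # ~124M serial-leaning MACs
print(f"ResNet-like FC head: {res_fc:>13,} MACs")  # ~0.5M MACs
```

The point is the ratio: nearly all of a ResNet inference is convolution work the NPU can parallelise, while a VGG16-style classifier head alone carries over a hundred million MACs of serial-leaning work.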

In terms of power efficiency we're very near to Huawei's claims of up to a 50x improvement. This is the key characteristic that will enable CNNs to be used in real-world use-cases. I was quite surprised to see Qualcomm's DSP reach similar efficiency levels to Huawei's NPU – albeit at 1/3rd to 1/4th of the performance. This bodes quite well for the Snapdragon 845's Hexagon 685, which promises up to a 3x increase in performance.

I want to take the opportunity to air a complaint about Google's Pixel 2: I was only able to run the benchmark on the Snapdragon 835's CPU because the Pixel 2 devices lack support for the SNPE framework. In a sense this was both expected and unexpected. With the introduction of the NN API in Android 8.1, which the Pixel 2 phones support and accelerate through the dedicated Pixel Visual Core companion chip, it's natural that Google would want to push usage of Android's standard APIs. On the other hand, this is also a limitation on the capabilities of the phone by the OEM vendor, one which I can't help but compare to Google's decision to omit OpenCL from Android by default. That decision has, in my eyes, heavily stifled the ecosystem, and is why we don't see more GPU-accelerated compute workloads – of which CNNs could have been one.

While we can’t run the Master Lu AI test on an iPhone, HiSilicon did publish some slides with internally reported numbers we can try to correlate. Based on the models included in the slide, the Apple A11's neural network IP should land somewhere slightly ahead of the Snapdragon 835's DSP, but still far behind the Kirin NPU. Again, we can't independently verify these figures due to the lack of a fitting iOS benchmark we can run ourselves.

Of course, the important question is: what is all this good for? HiSilicon discloses that one current use-case is noise reduction via CNN processing, which is able to increase the voice recognition rate in heavy traffic from 80% to 92%.
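HiSilicon hasn't published the network in question, but conceptually such a speech denoiser can be as simple as a small stack of 1D convolutions mapping noisy audio frames to clean ones. The sketch below is a minimal illustration; the topology and the 16 kHz frame length are my assumptions:

```python
# Minimal sketch of a speech-denoising CNN of the kind HiSilicon describes:
# a stack of 1D convolutions mapping noisy audio frames to clean ones.
# The topology and 16000-sample frame length are assumptions; HiSilicon
# has not published its actual network.
import tensorflow as tf

denoiser = tf.keras.Sequential([
    # One second of audio at 16 kHz, single channel.
    tf.keras.layers.Conv1D(32, 9, padding="same", activation="relu",
                           input_shape=(16000, 1)),
    tf.keras.layers.Conv1D(32, 9, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(1, 9, padding="same"),  # predicted clean frame
])
# Trained on (noisy, clean) frame pairs; inference is what the NPU accelerates.
denoiser.compile(optimizer="adam", loss="mse")
denoiser.summary()
```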

The other most publicised use-case is the implementation in the camera app. The Mate 10's camera makes use of the NPU to run inferencing that recognises different scenarios and optimises the camera settings based on pre-sets for those scenarios. The Mate 10 also comes with a translation app, developed with Microsoft, which is able to use the NPU for accelerated offline translation – for me, definitely the single most impressive usage. Inside the built-in gallery application we also see the use of image classification to create a new section where pictures are organised by content type. The former scenarios, where the SoC is doing live inferencing on a media stream such as the camera feed, are also the use-case where HiSilicon has an advantage over Qualcomm, as it employs both a DSP and the NPU, whereas Snapdragon SoCs have to share the DSP resources between vision processing and neural network inferencing workloads.

Oddly enough, the Kirin 970 in a sense has double the silicon IP capable of running neural networks efficiently, as its vision pipeline also includes a Cadence Tensilica Vision P6 DSP which should be in the same performance class as Qualcomm's Hexagon 680 DSP – but it is currently not exposed to user applications.

While the Mate 10 does make some use of the NPU, it's hard to argue that it's a definitive differentiating factor for the end-user. Neural network usage in mobile doesn't yet have the kind of killer applications it has in the automotive and security camera sectors. Again, this is due to the ecosystem being in its early days, with the Mate 10 among the first devices to actually offer such a dedicated acceleration block. It's arguable whether it was worth it for the Kirin 970 to implement such a block, and Huawei is very open about the fact that it's reaching out to developers to try and find more use-cases for the silicon. At the very least, Huawei should be lauded for innovating with something new.

Huawei/Microsoft's translation app seemed to be the most distinguished experience on the Mate 10, so maybe there are more non-image-based use-cases to be explored in the future. Currently the app allows the traditional snapshot of a foreign-language text and then shows a translated overlay, but imagine a future implementation that is able to do this live from the camera feed and allow for an AR experience. At CES, MediaTek also showed a distinctive use-case for CNNs: in video conferencing, the video encoder is fed metadata on scene composition by a CNN layer doing image recognition, telling the encoder to use finer-grained block sizes where a user's face is, thus increasing video quality. It's more likely that neural network use-cases will slowly creep up over time rather than arrive as one revolutionary thing: as more devices start to incorporate such IP blocks and they become more widespread, developers will be more enticed to find uses for them.
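MediaTek hasn't detailed its implementation, but the general mechanism is easy to sketch: run face detection on each frame and hand the encoder a per-macroblock quality map biased towards the detected region. The snippet below illustrates the idea with OpenCV's stock face detector; the QP-offset representation and the input file are assumptions, and the actual encoder hookup is vendor-specific:

```python
# Sketch of the MediaTek-style idea: detect a face in a frame and build a
# per-16x16-macroblock quality map an encoder could consume (here expressed
# as negative QP offsets, i.e. "spend more bits here"). The input file and
# offset representation are assumptions; the encoder hookup is omitted.
import cv2
import numpy as np

frame = cv2.imread("frame.png")  # assumed input frame from the video feed
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

mb = 16  # macroblock size
rows, cols = frame.shape[0] // mb, frame.shape[1] // mb
qp_offsets = np.zeros((rows, cols), dtype=np.int8)

for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 4):
    # Lower the QP (raise quality) for macroblocks covering the face.
    qp_offsets[y // mb:(y + h) // mb + 1, x // mb:(x + w) // mb + 1] = -6

print(qp_offsets)  # the map a CNN-assisted encoder would be fed per frame
```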

Comments

  • Wardrive86 - Monday, January 22, 2018 - link

    Also really a testament to the Adreno 500 series of GPUs... great performance with good energy consumption and good temps. Can't wait to see how the 600 series does.
  • arvindgr - Monday, January 22, 2018 - link

    Nice article. Can someone highlight whether the chipset supports USB 3.x? GsmArena lists USB 2.0, which is scary for a flagship chip!
  • hescominsoon - Monday, January 22, 2018 - link

    Samsung does not use Exynos in the US due to a license agreement with... Qualcomm. https://www.androidcentral.com/qualcomm-licensing-...

    I prefer Exynos to the QC SoCs...
  • Wardrive86 - Monday, January 22, 2018 - link

    Why do you prefer Exynos over Snapdragon? Not being smart, just curious.
  • tuxRoller - Monday, January 22, 2018 - link

    The 3% higher integer IPC?
  • lilmoe - Monday, January 22, 2018 - link

    I understand that it's out of your educational level to understand what makes an SoC better, since it has been explained to you by myself and others, so please stop using abbreviations like you're some sort of expert. Do you even know what IPC means? SMH...
  • tuxRoller - Tuesday, January 23, 2018 - link

    I'm assuming you've confused me with another.

    "The Exynos 8895 shows a 25% IPC uplift in CINT2006 and 21% uplift in CFP2006 whilst leading the A73 in overall IPC by a slight 3%."

    Yes, that's simply referencing the CPU, but that's a pretty important component and one whose prowess fans of Sam have enjoyed trumpeting.
  • Wardrive86 - Monday, January 22, 2018 - link

    Ok, maybe... though I don't know on what phone workload you would even be able to see a "3% higher integer IPC". The only workloads I'm running that even remotely tax these monsters really come down to how well the Vulkan drivers pan out, the SoC's ability to not thermally throttle all of its performance away, and actually doing this for a while away from a charger. For these workloads Snapdragon is king, as the Mali Vulkan/OpenGL ES 3.x drivers are terrible in comparison. Again, I was just curious.
  • tuxRoller - Tuesday, January 23, 2018 - link

    @Wardrive86 I was being a bit facetious. I assume the person either prefers Samsung because of an association that's developed between the success of the company and their own sense of self-worth, or they like watching YouTube videos of people opening a bunch of apps while a timer runs on the screen :)
  • Space Jam - Monday, January 22, 2018 - link

    >We’ve seen companies such as Nvidia try and repeatedly fail at carving out meaningful market-share.

    I don't think I'd call Nvidia's strategy for mobile SoCs as of the Shield Portable 'pursuing market-share', and I think their actual intentions have been more long-term, with emphasis around the Drive CX/PX. The Shield devices were just a convenient way to monetize exploration into ARM and their custom Denver cores. Hence why we saw the Shield Portable and Tablet more or less die after one iteration; the SoCs were more or less there as an experiment. I don't think they were really prepared for the success the Shield TV has had, so that's gotten to see some evolution; the Nintendo Switch win is also nice for them but not really the focus. As much as I want to see a more current Tegra for a Shield Tablet (A73, Pascal cores, <=16nm), the Shield Tablet 2 was cancelled and doesn't look to be getting an update.

    >Meanwhile even Samsung LSI, while having a relatively good product with its flagship Exynos series, still has not managed to win over the trust of the conglomerate's own mobile division. Rather than using Exynos as an exclusive keystone component of the Galaxy series, Samsung has instead been dual-sourcing it along with Qualcomm’s Snapdragon SoCs. It’s therefore not hard to make the claim that producing competitive high-end SoCs and semiconductor components is a really hard business.

    We did see the Exynos 7420 with its Samsung sourced Exynos modem 333 which further adds to the questions of *why* Samsung bothers to source Snapdragons for the US. That's just extra development complexity on multiple levels. I always thought it had something to do with the cost of CDMA patent licensing, so they'd just opt to use Qualcomm's products and the Galaxy S6 was a special situation as Snapdragon was hot garbage.

    There has to be some reason that Samsung bothers with Snapdragon when their Exynos offerings perform pretty similarly.
