NPU Performance Tested

To test the performance of the NPU we need a benchmark which currently targets all of the various vendor APIs. Unfortunately, short of developing our own implementation, the choices at this stage are scarce, but luckily there is one: the popular Chinese benchmark suite Master Lu recently introduced an AI benchmark implementing both HiSilicon’s HiAI and Qualcomm’s SNPE frameworks. The benchmark implements three different neural network models: VGG16, InceptionV3, and ResNet34. The input dataset is 100 images, a subset of the ImageNet reference database. As a fall-back the app implements the TensorFlow inferencing library to run on the CPU. I ran the performance figures on the Mate 10 Pro, the Mate 9, and two Snapdragon 835 devices (Pixel 2 XL & V30), running on the CPU as well as the Hexagon DSP.
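At its core, the measurement boils down to timing a fixed set of inferences. The sketch below shows the idea behind the TensorFlow CPU fall-back path; it is only an approximation, as Master Lu's actual harness and image subset are not public, and a stock InceptionV3 with random weights merely stands in for the bundled networks.

```python
# Minimal sketch of a CPU inference benchmark loop, assuming TensorFlow
# and a stock InceptionV3 as a stand-in for the benchmark's bundled models.
import time
import numpy as np
import tensorflow as tf

model = tf.keras.applications.InceptionV3(weights=None)  # random weights suffice for timing
images = np.random.rand(100, 299, 299, 3).astype(np.float32)  # stand-in for the 100 test images

model.predict(images[:1], verbose=0)  # warm-up run to exclude graph build time

start = time.perf_counter()
for img in images:
    model.predict(img[None, ...], verbose=0)  # one inference per image, as the app does
elapsed = time.perf_counter() - start

print(f"{len(images) / elapsed:.2f} inferences per second")
```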

Similarly to the SPEC2006 results, I chose to use a more complex graph to better showcase the three dimensions of average power (W), efficiency (mJ/inference), and absolute performance (fps, or inferences per second).
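The three axes are tied together by simple arithmetic: energy per inference in millijoules is just average power divided by throughput. The helper below illustrates this relationship with placeholder numbers, not the measured results.

```python
# Energy per inference (mJ) = average power (W) / throughput (inf/s) * 1000.
# The figures below are illustrative placeholders, not the article's data.
def mj_per_inference(avg_power_w: float, inferences_per_s: float) -> float:
    """Energy cost of a single inference in millijoules."""
    return avg_power_w / inferences_per_s * 1e3

print(mj_per_inference(avg_power_w=6.0, inferences_per_s=1.5))   # CPU-class example: 4000 mJ
print(mj_per_inference(avg_power_w=4.0, inferences_per_s=30.0))  # NPU-class example: ~133 mJ
```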

The first thing we notice from the graph is an order of magnitude difference in performance between the NPU and CPU implementations. Running the networks as-is on the CPUs, we’re not able to exceed 1-2 fps, and we do so at very heavy CPU power consumption. Both the Snapdragon 835 and the Kirin 960 CPUs struggle with the workloads, with average power draw exceeding sustainable levels.

Qualcomm’s Hexagon DSP is able to improve on the CPU performance by a factor of 5-8x. But Huawei’s NPU performance figures are again several factors above that, showcasing up to a 4x lead in ResNet34. The reason the performance ratios differ between models lies in their design. Convolutional layers are heavily parallelisable, whilst the pooling and fully connected layers of the models must use more serial processing steps. ResNet in particular spends a larger percentage of a single inference on convolution processing and is thus able to achieve a higher utilization rate of the Kirin NPU.
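A quick back-of-the-envelope calculation makes the difference concrete: a convolution layer's work is an independent dot product per output pixel, while a fully connected layer is a single matrix-vector product with far less parallelism to exploit. The layer shapes below are illustrative (VGG16-like), not an exact breakdown of the benchmark models.

```python
# MAC (multiply-accumulate) counts for a conv layer vs. a fully connected
# layer, using well-known formulas; shapes are illustrative only.
def conv_macs(h_out, w_out, c_in, c_out, k):
    # every output pixel is an independent k*k*c_in dot product -> massively parallel
    return h_out * w_out * c_out * (k * k * c_in)

def fc_macs(n_in, n_out):
    # a single matrix-vector product; fewer MACs and more serial dependencies
    return n_in * n_out

print(f"conv 3x3, 512->512 @ 14x14: {conv_macs(14, 14, 512, 512, 3) / 1e6:.0f}M MACs")
print(f"fully connected 4096->4096: {fc_macs(4096, 4096) / 1e6:.0f}M MACs")
```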

In terms of power efficiency we’re very near to Huawei’s claims of up to a 50x improvement. This is the key characteristic that will enable CNNs to be used in real-world use-cases. I was quite surprised to see Qualcomm’s DSP reach similar efficiency levels to Huawei’s NPU – albeit at 1/3rd to 1/4th of the performance. This bodes quite well for the Snapdragon 845’s Hexagon 685, which promises up to a 3x increase in performance.

I wanted to take the opportunity to rant about Google’s Pixel 2: I was only able to run the benchmark on the Snapdragon 835’s CPU because the Pixel 2 devices lack support for the SNPE framework. This was in a sense both expected and unexpected. With the introduction of the NN API in Android 8.1 – which the Pixel 2 phones support and accelerate through the dedicated Pixel Visual Core chip – it’s natural that Google would want to push usage of Android’s standard APIs. But on the other hand this is also a limitation on the capabilities of the phone by the OEM vendor, which I can’t help but compare to Google’s decision to omit OpenCL from Android by default. That is a decision which in my eyes has heavily stifled the ecosystem and is why we don’t see more GPU-accelerated compute workloads, of which CNNs could have been one.

While we can’t run the Master Lu AI test on an iPhone, HiSilicon did publish some slides with internally reported numbers we can try to correlate. Based on the models included in the slide, the Apple A11’s neural network IP should land somewhere slightly ahead of the Snapdragon 835’s DSP but still far behind the Kirin NPU – though again, we can’t independently verify these figures due to the lack of a fitting iOS benchmark we can run ourselves.

Of course the important question is: what is this all good for? HiSilicon discloses that one use-case already in production is noise reduction via CNN processing, which is able to increase the voice recognition rate in heavy traffic from 80% to 92%.
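For the curious, a speech denoiser of this kind can be as simple as a small stack of 1-D convolutions mapping a noisy waveform window to a clean one. The sketch below is purely illustrative – HiSilicon has not published the architecture it actually uses.

```python
# A minimal illustrative speech-denoising CNN in Keras: 1-D convolutions
# mapping a noisy waveform to a clean one. Not HiSilicon's actual network.
import tensorflow as tf

denoiser = tf.keras.Sequential([
    tf.keras.Input(shape=(16000, 1)),                  # 1 s of 16 kHz audio
    tf.keras.layers.Conv1D(32, 9, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(32, 9, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(1, 9, padding="same"),      # denoised waveform out
])
denoiser.compile(optimizer="adam", loss="mse")  # would train on (noisy, clean) pairs
```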

The other most publicised use-case is the implementation in the camera app. The Mate 10’s camera makes use of the NPU to run inferencing to recognize different scenarios and optimize the camera settings based on presets for those scenarios. The Mate 10 also comes with a translation app, developed together with Microsoft, which is able to use the NPU for accelerated offline translation – for me this was definitely the single most impressive usage. Inside the built-in gallery application we also see the use of image classification to create a new section where pictures are organized by content type. The former scenario, where the SoC is doing live inferencing on a media stream such as the camera feed, is also the use-case where HiSilicon has an advantage over Qualcomm, as it employs both a DSP and the NPU, whereas Snapdragon SoCs have to share the DSP resources between vision processing and neural network inferencing workloads.

Oddly enough, the Kirin 970 effectively has two blocks of silicon IP capable of running neural networks efficiently, as its vision pipeline also includes a Cadence Tensilica Vision P6 DSP which should be in the same performance class as Qualcomm’s Hexagon 680 DSP, but which is currently not exposed to user applications.

While the Mate 10 does make some use of the NPU, it’s hard to argue that it’s a definitive differentiating factor for the end-user. Currently, neural network usage in mobile doesn’t seem to have the same killer applications that it has in the automotive and security camera sectors. Again, this is due to the ecosystem being in its early days, with the Mate 10 among the first devices to actually offer such a dedicated acceleration block. It’s arguable whether it was worth it for the Kirin 970 to have implemented such a block, and Huawei is very open about the fact that it’s reaching out to developers to try and find more use-cases for the silicon. At the very least, Huawei should be lauded for innovating with something new.

Huawei and Microsoft’s translation app seemed to be the most distinguished experience on the Mate 10, so maybe there are more non-image-based use-cases that can be explored in the future. Currently the app allows the traditional snapshot of a foreign-language text and then shows a translated overlay, but imagine a future implementation that is able to do this live on the camera feed and allow for an AR experience. MediaTek at CES also showed a distinguishing use-case for CNNs: in video conferencing, the video encoder is fed metadata on scene composition by a CNN layer doing image recognition, telling the encoder to use finer-grained block sizes where a user’s face is, thus increasing video quality. It’s more likely that neural network use-cases will slowly creep in over time rather than arrive as one revolutionary thing: as more devices incorporate such IP blocks and they become more widespread, developers will be more enticed to find uses for them.
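As a rough illustration of the MediaTek approach, the sketch below uses OpenCV’s stock Haar face detector as a stand-in for the CNN and builds a per-block quality map; the per-block-QP encoder interface it would feed is hypothetical.

```python
# Sketch of CNN-guided region-of-interest encoding: detect face regions,
# then assign those blocks a lower QP (higher quality). OpenCV's Haar
# detector stands in for the CNN; the encoder interface is hypothetical.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def roi_quality_map(frame_bgr, base_qp=38, face_qp=28):
    """Per-16x16-block QP map: spend more bits where faces are detected."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    qp_map = np.full((h // 16, w // 16), base_qp, dtype=np.uint8)
    for (x, y, fw, fh) in cascade.detectMultiScale(gray, 1.1, 4):
        qp_map[y // 16:(y + fh) // 16 + 1, x // 16:(x + fw) // 16 + 1] = face_qp
    return qp_map  # would be handed to an encoder accepting per-block QP
```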

Comments

  • jjj - Monday, January 22, 2018 - link

    Sorry, it's an SD821 phone
  • tuxRoller - Tuesday, January 23, 2018 - link

    Thanks for that IHS link. I just wasn't able to find a recent BOM which included a Snapdragon that wasn't behind a paywall :/
    The first link wasn't working but I found others on the Qualcomm site. They list licensing terms of 2.275% (5G only) or 3.25% (multimode). Given that, I agree that offering an ARM laptop that doesn't include a (working) baseband makes more sense.

    https://www.qualcomm.com/documents/qualcomm-5g-nr-...
    https://www.qualcomm.com/documents/examples-of-roy...
  • jjj - Monday, January 22, 2018 - link

    Well, the chip and LTE (LTE meaning hardware + licensing costs) don't add hundreds of dollars to the retail price, but the extra cost forces them to position these as high end.
    A mobile SoC actually has some cost positives too, as it offers more integration, thus slightly reducing costs – but with a high-end SoC and LTE, things go sideways.
    I was telling people before these got released that they'd be highly specced with high prices, but even I wasn't expecting things to be this bad and thought they'd at least have higher-res displays at current prices.

    Give me a laptop with SD670 (yet to be announced) and no LTE at $300 and I might consider it. Oh well, at least we have Raven Ridge now.
  • lilmoe - Monday, January 22, 2018 - link

    Raven Ridge is where it's at. Let's hope it doesn't disappoint.
  • Manch - Monday, January 22, 2018 - link

    Maybe Huawei is late to the party bc they need time to integrate "features" at the behest of the Chinese govt?
  • StormyParis - Monday, January 22, 2018 - link

    If you mean "move all their servers to a gov'-owned and operated facility", that's Apple China.
  • Manch - Tuesday, January 23, 2018 - link

    Right now the US gov is very concerned about Huawei, to the point they're pressuring AT&T to stop using their products. In addition, they don't like them being involved in next-gen wireless because of the security risk involved. To be fair, the company is filled top-down with Chinese government officials.

    As for Apple, they're not the only US or EU company that has given up IP to the Chinese government in order to play in their backyard. Of course that comes at a cost in the long run.

    It will be interesting to see what happens over the next few years between China, the EU & US over this issue.
  • fteoath64 - Thursday, January 25, 2018 - link

    At the rate China is pouring money into AI with little to zero oversight, they'll be the first country to be pwned by a super-AI (the first AGI that is superhuman); from there, the democratization of rights and freedom will accelerate. Maybe a bit turbulent in the adjustment period, but it will prevail. The process has already been in motion for some months...
  • jospoortvliet - Saturday, January 27, 2018 - link

    What is in motion? Democracy? With Xi in power it is rather going the other way around. The progress China has made in the decades since Tiananmen Square is going to be wiped out soon...

    Meanwhile, current AI is as far from the type of generic AI you talk about as we were from useful neural networks in the early '80s... Don't count on it soon.
  • french toast - Monday, January 22, 2018 - link

    Nice article Andrei.
    It just demonstrates how much Qualcomm is killing it right now – the GPU is nearly twice as efficient as Mali, and likely much more efficient in area also.
    Even the Hexagon 680 DSP, which is not a dedicated AI processor, can match the efficiency of likely the best AI processor in smartphones... Huawei's NPU.
    Aside from the horrible mistakes of the Snapdragon 810 & 820... they seem to have got their CPU/SoC decisions in order.

    9810 vs 845 is going to be a battle royale; Samsung's M3 might well turn the tables.
