NPU Performance Tested

To test the performance of the NPU we need a benchmark which currently targets all of the various vendor APIs. Unfortunately, at this stage, short of developing our own implementation, the choices are scarce, but luckily there is one: the popular Chinese benchmark suite Master Lu recently introduced an AI benchmark implementing both HiSilicon’s HiAI and Qualcomm’s SNPE frameworks. The benchmark implements three different neural network models: VGG16, InceptionV3, and ResNet34. The input dataset is 100 images, a subset of the ImageNet reference database. As a fall-back the app implements the TensorFlow inferencing library to run on the CPU. I ran the benchmark on the Mate 10 Pro and Mate 9 as well as on two Snapdragon 835 devices (Pixel 2 XL & V30), running on the CPU as well as on the Hexagon DSP where supported.
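
A quick way to sanity-check the CPU fall-back methodology is to time one of the models with stock TensorFlow. The sketch below is my own approximation, not the Master Lu app's actual code: it uses random weights (fine for throughput timing) and random tensors as a stand-in for the 100-image ImageNet subset.

```python
import time
import numpy as np
import tensorflow as tf

# Approximation of the CPU fall-back path: time InceptionV3 inference
# over 100 images and report inferences per second. Random weights and
# random inputs are stand-ins; throughput doesn't depend on weight values.
model = tf.keras.applications.InceptionV3(weights=None)
images = np.random.rand(100, 299, 299, 3).astype(np.float32)

start = time.perf_counter()
for img in images:
    model.predict(img[None, ...], verbose=0)
elapsed = time.perf_counter() - start
print(f"{len(images) / elapsed:.2f} inferences/s")
```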

Similarly to the SPEC2006 results, I chose a more complex graph to better showcase the three dimensions of average power (W), efficiency (mJ/inference), and absolute performance (fps, or inferences per second).
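
The three axes are directly linked: energy per inference is simply average power divided by throughput. A minimal sketch of the conversion, with made-up numbers rather than our measured results:

```python
# Energy per inference (mJ) from average power (W) and throughput (inf/s):
# E = P / throughput, converted from joules to millijoules.
def energy_per_inference_mj(avg_power_w: float, inferences_per_s: float) -> float:
    return (avg_power_w / inferences_per_s) * 1000.0

# Hypothetical example: 2 W at 10 inferences/s costs 200 mJ per inference.
print(energy_per_inference_mj(2.0, 10.0))  # 200.0
```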

The first thing we notice in the graph is an order-of-magnitude performance difference between the NPU and CPU implementations. Running the networks as-is on the CPUs, we're not able to exceed 1-2 fps, and we do so at very heavy CPU power consumption. Both the Snapdragon 835 and Kirin 960 CPUs struggle with the workloads, with average power draw exceeding sustainable levels.

Qualcomm’s Hexagon DSP is able to improve on the CPU performance by a factor of 5-8x. But Huawei’s NPU performance figures are again several factors above that, showcasing up to a 4x lead in ResNet34. The reason the performance ratios differ between models lies in their design: convolutional layers are heavily parallelisable, whilst the pooling and fully connected layers must use more serial processing steps. ResNet34 in particular spends a larger percentage of a single inference in convolution processing and is thus able to achieve a higher utilization rate on the Kirin NPU.
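
To illustrate why the convolution share matters, here's a back-of-the-envelope comparison of multiply-accumulate (MAC) counts; the layer shapes are representative picks of my own, not the exact VGG16 or ResNet34 configurations:

```python
# MAC counts for a convolutional layer vs a fully connected layer.
def conv_macs(h_out, w_out, c_in, c_out, k):
    # Each output element needs a k*k*c_in dot product.
    return h_out * w_out * c_out * (k * k * c_in)

def fc_macs(n_in, n_out):
    return n_in * n_out

# A single mid-network 3x3 conv (56x56 map, 128 -> 128 channels) vs a
# 512 -> 1000 classifier layer: the conv does ~900x the arithmetic, and
# all of it is independent work an NPU can spread across its MAC array.
conv = conv_macs(56, 56, 128, 128, 3)  # ~462 million MACs
fc = fc_macs(512, 1000)                # 512 thousand MACs
print(f"conv/fc MAC ratio: {conv / fc:.0f}x")
```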

In terms of power efficiency we're very near Huawei's claims of up to a 50x improvement. This is the key characteristic that will enable CNNs to be used in real-world use-cases. I was quite surprised to see Qualcomm's DSP reach similar efficiency levels to Huawei's NPU – albeit at 1/3rd to 1/4th of the performance. This bodes quite well for the Snapdragon 845's Hexagon 685, which promises up to a 3x increase in performance.

I want to take the opportunity to rant about Google's Pixel 2: I was only able to run the benchmark on the Snapdragon 835's CPU because the Pixel 2 devices lack support for the SNPE framework. In a sense this was both expected and unexpected. With the introduction of the NN API in Android 8.1, which the Pixel 2 phones support and accelerate through the dedicated Pixel Visual Core SoC, it's natural that Google would want to push usage of Android's standard APIs. But on the other hand this is also a limitation on the capabilities of the phone by the OEM vendor, which I can't help but compare to Google's decision to omit OpenCL by default in Android. That decision has, in my eyes, heavily stifled the ecosystem and is why we don't see more GPU-accelerated compute workloads, of which CNNs could have been one.

While we can't run the Master Lu AI test on an iPhone, HiSilicon did publish some slides with internally reported numbers we can try to correlate. Based on the models included in the slide, the Apple A11's neural network IP should land slightly ahead of the Snapdragon 835's DSP but still far behind the Kirin NPU; again, we can't independently verify these figures due to the lack of a fitting iOS benchmark we can run ourselves.

Of course the important question is: what is this all good for? HiSilicon discloses that one use-case already deployed is noise reduction via CNN processing, which is able to increase the voice recognition rate in heavy traffic from 80% to 92%.

The other most publicised use-case is the implementation in the camera app. The Mate 10's camera makes use of the NPU to run inferencing to recognize different scenarios and optimize the camera settings based on pre-sets for those scenarios. The Mate 10 also comes with a translation app, developed with Microsoft, which is able to use the NPU for accelerated offline translation – for me this was definitely the single most impressive usage. Inside the built-in gallery application we also see the use of image classification to create a new section where pictures are organized by content type. The former scenario, where the SoC is doing live inferencing on a media stream such as the camera feed, is also the use-case where HiSilicon has an advantage over Qualcomm, as the Kirin 970 employs both a DSP and the NPU, whereas Snapdragon SoCs have to share the DSP resources between vision processing and neural network inferencing workloads.

Oddly enough, the Kirin 970 effectively has double the silicon IP capable of running neural networks efficiently, as its vision pipeline also includes a Cadence Tensilica Vision P6 DSP, which should be in the same performance class as Qualcomm's Hexagon 680 DSP but is currently not exposed to user applications.

While the Mate 10 does make some use of the NPU, it's hard to argue that it's a definitive differentiating factor for the end-user. Neural network usage in mobile doesn't yet seem to have the killer applications it has in the automotive and security camera sectors. Again, this is due to the ecosystem being in its early days, with the Mate 10 among the first devices to actually offer such a dedicated acceleration block. It's arguable whether it was worth it for the Kirin 970 to implement such a block, and Huawei is very open about the fact that it's reaching out to developers to try to find more use-cases for the silicon; at the very least Huawei should be lauded for innovating with something new.

Huawei/Microsoft's translation app seemed to be the most distinctive experience on the Mate 10, so maybe there are more non-image-based use-cases that can be explored in the future. Currently the app allows the traditional snapshot of a foreign-language text and then shows a translated overlay, but imagine a future implementation that can do it live from the camera feed and allow for an AR experience. At CES, MediaTek also showed a distinctive use-case for CNNs: for video conferencing, the video encoder is fed metadata on scene composition by a CNN doing image recognition, telling the encoder to use finer-grained block sizes where a user's face is, thus increasing video quality where it matters. It's more likely that neural network use-cases will slowly creep in over time rather than arrive as one revolutionary thing: as more devices start to incorporate such IP and it becomes more widespread, developers will be more enticed to find uses for it.
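
As a rough illustration of MediaTek's approach, the sketch below turns a hypothetical face bounding box from a CNN detector into a per-macroblock quality map an encoder could consume; the frame size, block size, and QP values are all invented for the example:

```python
import numpy as np

# Build a per-macroblock QP map: lower QP (more bits, finer quantization)
# inside the detected face region, a coarser baseline everywhere else.
def roi_qp_map(frame_w, frame_h, face_box, block=16, base_qp=32, face_qp=24):
    rows, cols = frame_h // block, frame_w // block
    qp = np.full((rows, cols), base_qp, dtype=np.int32)
    x0, y0, x1, y1 = face_box  # pixel coords from the hypothetical detector
    qp[y0 // block:(y1 + block - 1) // block,
       x0 // block:(x1 + block - 1) // block] = face_qp
    return qp

# Example: 640x360 frame, face detected at (280, 90)-(360, 190).
print(roi_qp_map(640, 360, (280, 90, 360, 190)))
```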

Comments

  • HStewart - Monday, January 22, 2018 - link

    One thing: I would not mind Windows for ARM if it had the following:

    1. Cheaper than current products - the 300-400 range
    2. No need for x86 emulation - not needed on such a product - it would be good as a Microsoft Office, email and internet machine, but not for PC apps
  • StormyParis - Monday, January 22, 2018 - link

    But then why do you need Windows to do that? Android, iOS and Chrome already do it, with a lot more other apps.
  • PeachNCream - Monday, January 22, 2018 - link

    It's too early in the Win10 on ARM product life cycle to call the entire thing a failure. I agree that it's possible we'll be calling it failed eventually, but the problems aren't solely limited to the CPU of choice. Right now, Win10 ARM platforms are priced too high (personal opinion) and _might_ be too slow doing the behind-the-scenes magic necessary to run x86 applications. Offering a lot more battery life, which Win10 on ARM does, isn't enough of a selling point to entirely offset the pricing and limitations. While I'd like to get 22 hours of battery life doing useful work with wireless active out of my laptops, it's more off mains time than I can realistically use in a day so I'm okay with a lower priced system with shorter life (~5 hours) since I use my phone for multi-day, super light computing tasks already. That doesn't mean everyone feels that way so let's wait and see before getting out the hammer and nails for that coffin.
  • jjj - Monday, January 22, 2018 - link

    The CPU is the reason for the high price; the SD835 comes at a high premium and LTE adds to it.
    That's why those machines are not competitive in price with Atom-based machines.
    Use a 25$ SoC and no LTE, and Windows on ARM becomes viable with an even longer battery life.
  • PeachNCream - Monday, January 22, 2018 - link

    I didn't realize the 835 accounted for so much of the BOM on those ARM laptops. Since Intel's tray pricing for their low-end chips isn't exactly cheap (not factoring in OEM/volume discounts), it didn't strike me as a significant hurdle. I'd thought most of the price was due to low production volume and attempts to make the first generation's build quality attractive enough to have a ripple effect on subsequently cheaper models.
  • tuxRoller - Monday, January 22, 2018 - link

    I'm not sure they do.
    A search indicated that in 2014 the average price of a Qualcomm solution for a platform was $24. The speculation was that the high-end SoCs were sold in the high $30s to low $40s.

    https://www.google.com/amp/s/www.fool.com/amp/inve...
  • jjj - Monday, January 22, 2018 - link

    It's likely more like 50-60$ for the hardware and 15$ for licensing on a 700$ laptop - although that includes only licenses to Qualcomm, and they are not the only ones getting paid.
    Even a very optimistic estimate can't go lower than 70$ total, and that's a large premium vs my suggestion of a 25$ SoC with no LTE.
    An 8-core A53 might go below 10$; something like the Helio X20 was around 20$ in its time, and one would assume that the SD670 will be 25-35$, depending on how competitive Mediatek is with the P70.
  • jjj - Monday, January 22, 2018 - link

    Some estimates will go much higher though (look at the LTE-enabling components too, not just the SoC, for the S8). http://www.techinsights.com/about-techinsights/ove...
    I don't think costs are quite that high, but they are supposed to know better.
  • tuxRoller - Monday, January 22, 2018 - link

    That's way higher than I've seen.

    http://mms.businesswire.com/media/20170420006675/e...

    Now, that's for the Exynos 8895, but I'd imagine prices are similar for Snapdragon.
    Regardless, these are all estimates. I'm not aware of anyone who actually knows the real prices of these (including licenses) who has come out and told us.
  • jjj - Monday, January 22, 2018 - link

    On licensing you can take a look at the newest 2 PDFs here: https://www.qualcomm.com/invention/licensing.
    Those are in line with the China agreement they have, at 3.5% and 5% of 65% of the retail value. There would likely be discounts for exclusivity and so on. So, assuming multimode, licensing would be 22.75$ for a 700$ laptop (5% of 65% of 700$), before any discounts (if any) - BUT that's only to Qualcomm and not others like Nokia, Huawei, Samsung, Ericsson and whoever else might try to milk this.

    As for the SoC, here's IHS for an SD835 phone: https://technology.ihs.com/584911/google-pixel-xl-...
