Final Words

HiSilicon’s Kirin 950 delivered impressive performance and efficiency, raising our expectations for its successor. And on paper at least, the Kirin 960 seems better in every way. It incorporates ARM’s latest IP, including A73 CPUs, the new Mali-G71 GPU with more cores, and a CCI-550 interconnect. It offers other improvements too, including a new modem that supports higher LTE speeds and support for UFS 2.1 storage. But when it comes to performance and efficiency, the Kirin 960 improves in some areas and regresses in others.

The Kirin 960’s A73 CPU cores are marginally faster than the 950’s A72 cores when handling integer workloads, with a more noticeable lead over Qualcomm’s Kryo and the older A57. For floating-point IPC, the opposite is true: Qualcomm’s Kryo and the Kirin 950’s A72 cores post better results than the 960’s A73.

Some of this performance regression may be explained by the Kirin 960’s memory performance. Both latency and read bandwidth improve for its larger 64KB L1 cache, but write bandwidth is lower than the Kirin 950’s. The 960’s L2 cache bandwidth is also lower for both reads and writes. Its latency to main memory improves by 25%, however, and bandwidth improves by an impressive 69%.
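
Cache and memory numbers like these are typically measured with pointer-chasing microbenchmarks, which walk a randomly shuffled chain of dependent loads so that hardware prefetchers cannot hide the latency. Below is a minimal sketch of the technique, assuming a working set large enough to spill past the L2 into DRAM; the buffer size and iteration count are arbitrary and are not the parameters used for this review’s measurements.

```c
/* Minimal pointer-chasing latency sketch: each load depends on the
 * previous one, so the average time per iteration approximates the
 * load-to-use latency of whatever level of the hierarchy the buffer
 * lands in. Sizes and iteration counts are illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    const size_t n = 1 << 22;             /* 4M pointers (~32MB): well past L2, into DRAM */
    void **chain = malloc(n * sizeof *chain);
    size_t *idx = malloc(n * sizeof *idx);
    if (!chain || !idx) return 1;

    for (size_t i = 0; i < n; i++) idx[i] = i;
    srand(1);                             /* fixed seed for repeatability */
    for (size_t i = n - 1; i > 0; i--) {  /* Fisher-Yates shuffle */
        size_t j = (size_t)rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
    for (size_t i = 0; i < n; i++)        /* link each element to the next in shuffled order */
        chain[idx[i]] = &chain[idx[(i + 1) % n]];

    const size_t iters = 1 << 24;
    void **p = &chain[0];
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < iters; i++)
        p = (void **)*p;                  /* serialized dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("avg load latency: %.2f ns (%p)\n", ns / iters, (void *)p); /* print p to defeat DCE */
    free(chain); free(idx);
    return 0;
}
```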

What’s really disappointing (and puzzling) about Kirin 960, though, is that its CPU efficiency is actually worse than the 950’s. ARM did a lot of work to reduce the A73’s power consumption relative to the A72, but the Kirin 960’s A73 cores see a substantial power increase over the 950’s A72 cores. The poor efficiency numbers are likely a combination of HiSilicon’s specific implementation and the switch to TSMC’s 16FFC process. This was definitely an unexpected result considering the Mate 9’s excellent battery life. Fortunately, Huawei was able to save power elsewhere, such as in the display, to make up for the SoC’s power increase, but it’s difficult not to think about how much better the battery life could have been.

The Kirin 960’s GPU power consumption is even worse, with peak power numbers that are entirely inappropriate for a smartphone. Part of the problem is poor efficiency, again likely a combination of implementation and process, which is only made worse by an overly aggressive 1037MHz peak operating point that serves only to improve the spec sheet and benchmark results.
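
As a rough illustration of why that last frequency step is so costly: dynamic power scales approximately as C·f·V², and squeezing out additional frequency generally requires additional voltage. The sketch below uses invented voltages purely for illustration; they are not Kirin 960’s actual operating points.

```c
/* Back-of-the-envelope DVFS comparison using P_dyn ~ C * f * V^2.
 * The voltages below are invented for illustration; they are not
 * taken from Kirin 960's real DVFS table. */
#include <stdio.h>

int main(void) {
    double f1 = 900e6,  v1 = 0.90;   /* hypothetical lower GPU operating point */
    double f2 = 1037e6, v2 = 1.00;   /* 1037MHz peak point, hypothetical voltage */

    double perf_gain  = f2 / f1;                         /* ~1.15x frequency */
    double power_gain = (f2 * v2 * v2) / (f1 * v1 * v1); /* ~1.42x dynamic power */

    printf("frequency: +%.0f%%, dynamic power: +%.0f%%\n",
           (perf_gain - 1) * 100, (power_gain - 1) * 100);
    return 0;
}
```

Under these assumed numbers, the peak point buys roughly 15% more frequency for roughly 40% more dynamic power, which is why a more conservative peak clock usually wins on efficiency.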

The Kirin 960 is difficult to categorize. It’s definitely not a clear upgrade over the 950, but it does just enough things right that we cannot dismiss it outright either. For example, its generally improved integer performance and lower system memory latency give it an advantage over the 950 in many real-world workloads. We cannot completely condemn its GPU either, because its sustained performance, at least in the Mate 9’s large aluminum chassis, is on par with or better than competing flagship phones, as is its battery life when gaming. Certainly the Mate 9 proves that Kirin 960 is a viable flagship SoC as long as Huawei puts in the effort to work around its flaws. But with a new generation of 10nm SoCs just around the corner, those flaws will only become more apparent.

Comments

  • MajGenRelativity - Tuesday, March 14, 2017

    I'm a dunce sometimes. I totally missed that. Thank you Ian!
  • fanofanand - Tuesday, March 14, 2017

    I love that you have begun moderating (to a degree) the comments section! It's nice to have someone with so much knowledge there to dispel the FUD! Not saying his question was bad, but I really do like that you are getting in the mud with us plebs :)
  • MajGenRelativity - Tuesday, March 14, 2017

    My question wasn't bad, just stupid :P Should have read that page a little more closely.
  • fanofanand - Tuesday, March 14, 2017

    I didn't mean to imply your question was bad at all, and I certainly wasn't lumping you in with those spreading FUD, but Ian has become a growing presence in the comments section and I for one like what he's doing. The comments section in nearly every tech article has become ugly, and having a calming, logical, rational presence like Ian only helps to contribute to a more polite atmosphere where disagreement can be had without presuming that the person with an opposing viewpoint is Hitler.
  • MajGenRelativity - Tuesday, March 14, 2017

    I thought this was the Internet, where the opposing viewpoint is always Hitler? :P
  • fanofanand - Tuesday, March 14, 2017

    Hitler has become omnipresent; now the barista who underfoams your latte must be Hitler!
  • lilmoe - Tuesday, March 14, 2017

    Shouldn't this provide you with even more evidence that max-frequency workloads are super artificial and completely unrepresentative of normal, day-to-day workloads? This further supports my claim from earlier article comments that chip designers are aiming for a certain performance target and optimizing efficiency around that point in particular.

    I keep saying this over and over (like a broken record at this point), but I firmly believe that the entire blogosphere's benchmarking methodology for mobile parts is seriously misleading. You're testing these processors the same way you would normally test workstation processors. The author even said so himself, yet the article contradicts his very statement. I believe further research/investigation should be done as to where that performance target is. It definitely differs from year to year, with different popular app trends, and from one OS upgrade to another.

    SPEC, Geekbench, and browser benchmarks, if tested in the context of the same device and the same OS version, are a good indication of what the chip can artificially achieve. But the real test, I believe, is launching a website, using Facebook, Snapchat, etc., and comparing the power draw of various chips, since that's what these chips were designed to run.

    There's also the elephant in the room that NO ONE is accounting for when testing and benchmarking, and that's touch input overhead. Most user interaction is through touch. I don't know about iOS, but everyone knows that Android ramps up the clock when the touchscreen detects input in order to reduce lag and latency. Your browser battery test does NOT account for that, further reducing its credibility as a valid representation of actual usage.

    I mention touch input clock ramps in particular because I believe this is the clock speed that OEMs believe delivers optimal efficiency on the performance curve for a given SoC, at least for the smaller cluster. A better test would be logging the CPU clocks of certain workloads and taking the average, then calculating the power draw of the CPU at that particular average clock (see the sketch after this thread).

    This is where I believe Samsung's SoCs shine the most. I believe they deliver the best efficiency for common workloads, evident in the battery life of their devices after normalization of screen size/resolution to battery capacity.

    Worth investigating IMO.
  • fanofanand - Tuesday, March 14, 2017

    If you can come up with a methodology where opening snapchat is a repeatable scientific test, send your hypothesis to Ryan, I'm sure he will indulge your fantasy.
  • lilmoe - Tuesday, March 14, 2017

    Yea, we all love fantasies. Thing is, in the last couple of paragraphs, Matt literally said that the entirety of the review does not match the actual real-world performance and battery life of the Mate 9.

    But sure, go ahead and keep testing mobile devices using these "scientific" conventions anyway, since it makes readers like fanofanand happy.
  • close - Tuesday, March 14, 2017

    That is, of course, an awesome goal. Now imagine that in the next review the battery life varies between 10 and 18 hours even on the same phone. Judge for yourself whether that kind of result is more useful for determining which phone has better battery life. Not only is your real-world usage vastly different from mine (and thus irrelevant to me), but you yourself can't even get through two days with identical battery life or identical usage. If you can't determine one phone's battery life properly, how do you plan on comparing that figure to the ones I come up with?

    If you judged your comment by the same standards you judge the article, you wouldn't have posted it. You implicitly admit there's no good way of testing in the manner you suggest (by refusing, or being unable, to provide a clearly better methodology), yet you insisted on posting it anyway. I will join the poster above in asking you to suggest something better. And don't skimp on the details. I'm sure that if you have a reasonable proposal it will be taken into consideration, not for your benefit but for all of ours.

    Some of these benchmarks try to simulate a sort of average real world usage (a little bit of everything) in a reproducible manner in order to be used in a comparison. That won't be 100% relevant but there is a good overlap and it's the best comparative tool we've got. Your generic suggestion would most likely provide even less relevant figures unless you come up with that better scenario that you insist on keeping to yourself.
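
A minimal sketch of the clock-logging approach lilmoe describes above: it samples Linux's standard cpufreq sysfs node at a fixed interval and reports the average clock. The interval and sample count are arbitrary assumptions, and reading this node on a stock Android phone typically requires an adb shell.

```c
/* Sketch: sample the current CPU frequency from Linux's cpufreq sysfs
 * node once per second and report the average, roughly the clock-logging
 * idea described in the comment thread above. The path is the standard
 * cpufreq node; interval and sample count are arbitrary assumptions. */
#include <stdio.h>
#include <unistd.h>

#define FREQ_PATH "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"

int main(void) {
    const int samples = 600;          /* 10 minutes at one sample per second */
    long long sum_khz = 0;
    int taken = 0;

    for (int i = 0; i < samples; i++) {
        FILE *f = fopen(FREQ_PATH, "r");
        if (!f) { perror("fopen"); return 1; }
        long khz = 0;
        if (fscanf(f, "%ld", &khz) == 1) { sum_khz += khz; taken++; }
        fclose(f);
        sleep(1);                     /* sampling interval (assumption) */
    }
    if (taken > 0)
        printf("average cpu0 clock: %.1f MHz over %d samples\n",
               sum_khz / 1000.0 / taken, taken);
    return 0;
}
```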
