System & ML Performance

Having investigated the new A13’s CPU performance, it’s time to look at how it performs in some system-level tests. Unfortunately there’s still a frustrating lack of proper system tests for iOS, particularly when it comes to tests like PCMark that would more accurately represent application use-cases. In lieu of that, we have to fall back to browser-based benchmarks. Browser performance is still an important aspect of device performance, as it remains one of the main workloads that puts large amounts of stress on the CPU while being sensitive to performance latency – in other words, responsiveness.

As always, the following benchmarks aren’t just a representation of the hardware capabilities, but also of the software optimizations of a phone. iOS13 has again increased browser-based benchmark performance by roughly 10% in our testing. We’ve gone ahead and updated the performance figures of previous-generation iPhones with new scores on iOS13 to have proper apples-to-apples comparisons for the new iPhone 11s.

Speedometer 2.0 - OS WebView

In Speedometer 2.0 we see the new A13-based phones exhibit a 19-20% performance increase compared to the previous-generation iPhone XS and the A12, in line with Apple’s performance claims. The increase this year is a bit smaller than what we saw last year with the A12, as it seems the main boost to the scores last year was the upgrade to a 128KB L1I cache.

JetStream 2 - OS Webview

JetStream 2 is a newer browser benchmark that was released earlier this year. The test is longer and possibly more complex than Speedometer 2.0 – although we still have to do proper profiling of the workload. The A13’s increases here are about 13%. Apple’s chipsets, CPUs, and custom JavaScript engine continue to dominate the mobile benchmarks, posting double the performance we see from the next-best competition.

WebXPRT 3 - OS WebView

Finally, WebXPRT represents more of a “scaling” workload that isn’t as steady-state as the previous benchmarks. Still, even here the new iPhones showcase an 18-19% performance increase.

Last year Apple made big changes to the kernel scheduler in iOS12 and vastly shortened the ramp-up time of the CPU DVFS algorithm, decreasing the time the system takes to transition from idling at low frequencies on the small cores to the full performance of the large cores. This resulted in significantly improved device responsiveness across a wide range of past iPhone generations.

Compared to the A12, the A13 doesn’t change all that much in terms of the time it takes to reach the maximum clock-speed of the large Lightning cores, with the CPU core reaching its peak in a little over 100ms.

What does change a lot is the time the workload resides on the smaller Thunder efficiency cores. On the A13 the small cores ramp up significantly faster than on the A12. There’s also a major change in scheduler behavior regarding when the workload migrates from the small cores to the large cores: on the A13 this now happens after around 30ms, while on the A12 it would take up to 54ms. Since the small cores are no longer able to request higher memory controller performance states on their own, it likely makes sense to migrate demanding workloads to the large cores sooner.

The A13’s Lightning cores start off at a base frequency of around 910MHz, which is a bit lower than the A12 and its base frequency of 1180MHz. What this means is that Apple has extended the dynamic range of the large cores in the A13 both towards higher performance as well as towards lower, more efficient frequencies.
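The ramp-up and migration timings above come from dedicated tooling, but the basic idea can be illustrated with a simple sketch: count how many iterations of a fixed busy-loop fit into successive short time windows right after work starts. On a DVFS-governed core the earliest windows complete fewer iterations than the later, full-frequency ones. The window sizes and workload here are our own illustration, not the methodology behind the article’s measurements, and interpreter overhead makes this far coarser than a native micro-benchmark:

```python
import time

def measure_ramp(windows=20, window_s=0.010):
    """Iterations completed in each successive fixed-length busy window."""
    counts = []
    sink = 0
    for _ in range(windows):
        deadline = time.perf_counter() + window_s
        iters = 0
        while time.perf_counter() < deadline:
            sink += iters * 2654435761  # arbitrary work to keep the core busy
            iters += 1
        counts.append(iters)
    return counts

counts = measure_ramp()
# A first/peak ratio well below 1.0 indicates the core started the workload
# at a lower frequency and ramped up over the first windows.
ratio = counts[0] / max(counts)
print(f"first/peak window throughput: {ratio:.2f}")
```

Run on a freshly idle device, the shape of the per-window counts gives a rough picture of how aggressively the DVFS governor ramps.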

Machine Learning Inference Performance

Apple also claims to have increased the performance of its neural processor IP block in the A13. To use this unit, applications have to go through the CoreML framework. Unfortunately we don’t have a custom tool for testing this as of yet, so we have to fall back to one of the rare external applications out there which does provide a benchmark for this, and that’s Master Lu’s AIMark.

Like the web-browser workloads, iOS13 has brought performance improvements for past devices, so we’ve rerun the iPhone X and XS scores for proper comparisons to the new iPhone 11.

[Charts: 鲁大师 / Master Lu – AIMark 3 – InceptionV3, ResNet34, MobileNet-SSD, DeepLabV3]

The improvements for the iPhone 11 and the new A13 vary depending on the model and workload. For the classical models such as InceptionV3 and ResNet34, we’re seeing 23-29% improvements in the inference rate. MobileNet-SSD sees a more limited 17% increase, while DeepLabV3 sees a major increase of 48%.

Generally, the issue with running machine learning benchmarks is that they run through an abstraction layer, in this case CoreML. We have no guarantees on how much of a model is actually being run on the NPU versus the CPU and GPU, as this can differ a lot depending on the ML drivers of the device.
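Since CoreML decides the execution target itself, one way to probe how much a model actually benefits from the NPU is to time the same model with restricted compute units and compare. The sketch below is a hypothetical harness: the timing helper is generic pure Python, while the commented lines show how one might plug in Apple’s coremltools package (whose `compute_units` option and on-device prediction are only available on macOS) – the model path and inputs are placeholders, not anything from the article:

```python
import time

def time_inference(predict, inputs, warmup=3, runs=20):
    """Return the average latency in ms of calling predict(inputs)."""
    for _ in range(warmup):        # warm caches / let DVFS ramp
        predict(inputs)
    start = time.perf_counter()
    for _ in range(runs):
        predict(inputs)
    return (time.perf_counter() - start) / runs * 1000.0

# With Apple's coremltools on macOS, one could compare compute units:
#   import coremltools as ct
#   full = ct.models.MLModel("model.mlmodel", compute_units=ct.ComputeUnit.ALL)
#   cpu  = ct.models.MLModel("model.mlmodel", compute_units=ct.ComputeUnit.CPU_ONLY)
#   speedup = time_inference(cpu.predict, x) / time_inference(full.predict, x)
# A large speedup suggests most of the model actually runs on the NPU/GPU
# rather than falling back to the CPU.

# Stand-in demonstration with a dummy "model":
def dummy_model(x):
    return sum(v * v for v in x)

latency_ms = time_inference(dummy_model, list(range(1000)))
print(f"avg latency: {latency_ms:.3f} ms")
```

This kind of A/B timing doesn’t tell you *which* layers fell back to the CPU, only whether the restricted configuration is materially slower overall.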

Nevertheless, the A13 and iPhone 11 here are very competitive and provide good iterative performance boosts for this generation.

Performance Conclusion

Overall, performance on the iPhone 11s is excellent, as we've come to expect time and time again from Apple. With that said, however, I can’t really say that I notice much of a difference from the iPhone XS in daily usage. So while the A13 delivers class-leading performance, it's probably not going to be very compelling for users coming from last year's A12 devices; the bigger impact will be felt coming from older devices. Otherwise, with this much horsepower, I feel like the user experience would benefit significantly more from an option to accelerate application and system animations, or even to turn them off completely, in order to really feel the proper snappiness of the hardware.
