System & ML Performance

Having investigated the new A13’s CPU performance, it’s time to look at how it performs in some system-level tests. Unfortunately there’s still a frustrating lack of proper system tests for iOS, particularly when it comes to tests like PCMark that would more accurately represent application use-cases. In lieu of that, we have to fall back on browser-based benchmarks. Browser performance remains an important aspect of device performance, as it is one of the main workloads that puts a large amount of stress on the CPU while also being sensitive to performance latency (essentially, responsiveness).

As always, the following benchmarks aren’t just a representation of the hardware capabilities, but also of a phone’s software optimizations. iOS 13 has again increased browser-based benchmark performance by roughly 10% in our testing. We’ve gone ahead and updated the performance figures of previous-generation iPhones with new scores on iOS 13 to have proper apples-to-apples comparisons for the new iPhone 11s.

Speedometer 2.0 - OS WebView

In Speedometer 2.0 we see the new A13-based phones exhibit a 19-20% performance increase compared to the previous generation iPhone XS and the A12. This is in line with Apple’s performance claims. The increase this year is a bit smaller than what we saw last year with the A12, as it seems the main boost to the scores last year was the upgrade to a 128KB L1I cache.

JetStream 2 - OS WebView

JetStream 2 is a newer browser benchmark that was released earlier this year. The test is longer and possibly more complex than Speedometer 2.0 – although we still have to do proper profiling of the workload. The A13’s performance increase here is about 13%. Apple’s chipsets, CPUs, and custom JavaScript engine continue to dominate the mobile benchmarks, posting double the performance we see from the next-best competition.

WebXPRT 3 - OS WebView

Finally, WebXPRT represents more of a “scaling” workload that isn’t as steady-state as the previous benchmarks. Still, even here the new iPhones showcase an 18-19% performance increase.

Last year Apple made big changes to the kernel scheduler in iOS 12, vastly shortening the ramp-up time of the CPU DVFS algorithm and decreasing the time the system takes to transition from idle frequencies on the small cores to the full performance of the large cores. This resulted in significantly improved device responsiveness across a wide range of past iPhone generations.

Compared to the A12, the A13 doesn’t change all that much in terms of the time it takes to reach the maximum clock speed of the large Lightning cores, with the CPU core reaching its peak in a little over 100ms.

What does change a lot is the time the workload resides on the smaller Thunder efficiency cores. On the A13 the small cores ramp up significantly faster than on the A12. There’s also a major change in scheduler behavior in terms of when the workload migrates from the small cores to the large cores: on the A13 this now happens after around 30ms, while on the A12 it would take up to 54ms. As the small cores are no longer able to request higher memory controller performance states on their own, it likely makes sense to migrate a more demanding workload to the large cores sooner.
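The ramp-up behavior described above can be observed from software with a simple microbenchmark: time fixed-size chunks of work starting from idle, and watch per-chunk times shrink as the governor raises the clock and the scheduler migrates the thread to the big cores. The sketch below illustrates the technique only; the chunk sizes and idle period are illustrative assumptions, and Python-level timing is far coarser than the instrumented measurements in the article.

```python
# Sketch: approximate CPU DVFS ramp-up latency by timing fixed-size work
# chunks immediately after an idle period. Early chunks run at lower
# clocks (and possibly on efficiency cores), so they take longer than
# steady-state chunks; the time until timings stabilize approximates the
# ramp latency. Sizes and the idle period are illustrative assumptions.
import time

def busy_chunk(iterations=200_000):
    """A fixed amount of integer work; wall time depends on clock speed."""
    acc = 0
    for i in range(iterations):
        acc += i * i
    return acc

def measure_ramp(chunks=50, idle_s=0.5):
    """Sleep so DVFS falls back to idle clocks, then time each work chunk."""
    time.sleep(idle_s)
    timings = []
    for _ in range(chunks):
        t0 = time.perf_counter()
        busy_chunk()
        timings.append(time.perf_counter() - t0)
    return timings

if __name__ == "__main__":
    t = measure_ramp()
    print(f"first chunk: {t[0]*1e3:.2f} ms, last chunk: {t[-1]*1e3:.2f} ms")
```

On a system with aggressive ramp-up (like the A13’s ~30ms migration point), the per-chunk times would be expected to converge within the first few chunks; a slower governor shows a longer tail of slow early chunks.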

The A13’s Lightning cores start off at a base frequency of around 910MHz, a bit lower than the A12’s base frequency of 1180MHz. This means that Apple has extended the dynamic range of the large cores in the A13 both towards higher performance and towards lower, more efficient frequencies.

Machine Learning Inference Performance

Apple has also claimed to have increased the performance of the neural processor IP block in the A13. To use this unit, applications have to go through the CoreML framework. Unfortunately we don’t have a custom tool for testing this as of yet, so we have to fall back on one of the rare external applications out there which does provide a benchmark for this, and that’s Master Lu’s AIMark.

Like the web-browser workloads, iOS 13 has brought performance improvements for past devices, so we’ve rerun the iPhone X and XS scores for proper comparisons to the new iPhone 11.

鲁大师 / Master Lu - AIMark 3 - InceptionV3
鲁大师 / Master Lu - AIMark 3 - ResNet34
鲁大师 / Master Lu - AIMark 3 - MobileNet-SSD
鲁大师 / Master Lu - AIMark 3 - DeepLabV3

The improvements for the iPhone 11 and the new A13 vary depending on the model and workload. For the classical models such as InceptionV3 and ResNet34, we’re seeing 23-29% improvements in the inference rate. MobileNet-SSD sees a more limited 17% increase, while DeepLabV3 sees a major increase of 48%.

Generally, the issue with running machine learning benchmarks is that they run through an abstraction layer, in this case CoreML. We have no guarantees on how much of a model is actually being run on the NPU versus the CPU and GPU, as this can differ a lot depending on the ML drivers of the device.
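An AIMark-style score is essentially an inference rate: time a batch of predictions after a warmup and report inferences per second. The harness below sketches that measurement; `predict` is a hypothetical stand-in for a real model call (through CoreML this would be an MLModel prediction, and the abstraction layer decides whether it lands on NPU, GPU, or CPU, which is exactly why the scores are hard to attribute).

```python
# Sketch of an AIMark-style inference-rate measurement. The `predict`
# callable is a hypothetical stand-in for a real framework call; the
# warmup lets drivers compile/cache the model before timing starts.
import time

def inference_rate(predict, warmup=3, runs=20):
    """Time `runs` predictions after a warmup; return inferences per second."""
    for _ in range(warmup):
        predict()
    t0 = time.perf_counter()
    for _ in range(runs):
        predict()
    elapsed = time.perf_counter() - t0
    return runs / elapsed

if __name__ == "__main__":
    def dummy_predict():
        # Stand-in workload; a real harness would run a model prediction here.
        sum(i * i for i in range(10_000))
    print(f"{inference_rate(dummy_predict):.1f} inferences/sec")
```

Note that a harness like this measures the whole stack, model, drivers, and dispatch decisions included, so two devices with identical NPUs can still post different scores.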

Nevertheless, the A13 and iPhone 11 here are very competitive and provide good iterative performance boosts for this generation.

Performance Conclusion

Overall, performance on the iPhone 11s is excellent, as we've come to expect time and time again from Apple. With that said, however, I can’t really say that I notice much of a difference from the iPhone XS in daily usage. So while the A13 delivers class-leading performance, it's probably not going to be very compelling for users coming from last year's A12 devices; the bigger impact will be felt coming from older devices. Otherwise, with this much horsepower I feel like the user experience would benefit significantly more from an option to accelerate application and system animations, or even just turn them off completely, in order to really feel the proper snappiness of the hardware.

242 Comments

  • Alistair - Wednesday, October 16, 2019 - link

    mwah ha ha ha... hahahaha. .. .. . hahahah hahaha.... sorry while I break down laughing incoherently. You criticize using SPEC to compare a mobile vs a desktop CPU, then you post links to garbage "look how fast the iPhone is at opening apps over and over again for minutes at a time" as if that is anything but the worst click bait "benching" you could possibly find on the internet.
  • joms_us - Wednesday, October 16, 2019 - link

    If you are into software engineering, you know that running a website or games does the same calculations that SPEC or GB do, right? So I'd rather see real-world results than cherry-picked, bloated scores from SPEC or GB.
  • Alistair - Wednesday, October 16, 2019 - link

    Since you think those youtube videos indicate anything at all, your judgement has been called into question. I don't think your opinion of SPEC is worth repeating.
  • joms_us - Thursday, October 17, 2019 - link

    An SoC is useless without the other components, so you are testing the whole phone because that is what you are going to use. Sadly, in this regard, the cherry-picked modules in both SPEC and GB do not translate to real-world performance for Apple.
  • WinterCharm - Thursday, October 17, 2019 - link

    App opening on a phone doesn't test anything, except disk read/write speeds and animation speeds. You're an idiot if you think these tests show anything.
  • joms_us - Thursday, October 17, 2019 - link

    Idiot? Look who's talking? How can an app display those UI elements, information, graphics, sound etc. if it is not using the CPU? The runtime/compiler does everything (read/write/mem copy/compress/decompress/sort/math etc.) for you so you can see them, otherwise they are just a bunch of worthless text or numbers like what SPEC and GB are showing.
  • Irish910 - Wednesday, October 16, 2019 - link

    Actually, if you looked at the graph comparing the A9-A13, it’s pretty clear that this chip can hang with Skylake, all in a packaged phone design with no fans or active cooling. They’re all compiled the same way when run on SPEC. So stop being salty.
  • WinterCharm - Wednesday, October 16, 2019 - link

    You speak as though Spec2006 is a bad benchmark, lol.
  • joms_us - Wednesday, October 16, 2019 - link

    Yep, 2006? Seriously?
    Run iPhone in Windows or Linux then come back with the results.
    These nonsense cross-platform comparisons of GB and SPEC are worth nothing if they are not running on the same OS.
  • tipoo - Thursday, October 17, 2019 - link

    >Run iPhone in Windows or Linux

    So you're setting your standard at results that are impossible to show you. You don't really think that comes off as a win on your end of the argument, do you?
