System & ML Performance

Having investigated the new A13’s CPU performance, it’s time to look at how it performs in some system-level tests. Unfortunately there’s still a frustrating lack of proper system tests for iOS, particularly tests like PCMark that would more accurately represent application use-cases. In lieu of that, we have to fall back to browser-based benchmarks. Browser performance is still an important aspect of device performance, as it remains one of the main workloads that puts a large amount of stress on the CPU while also being latency-sensitive (essentially, a proxy for responsiveness).

As always, the following benchmarks aren’t just a representation of the hardware capabilities, but also of the software optimizations of a phone. iOS13 has again increased browser-based benchmark performance by roughly 10% in our testing. We’ve gone ahead and updated the performance figures of previous-generation iPhones with new scores on iOS13 to have proper apples-to-apples comparisons for the new iPhone 11s.

Speedometer 2.0 - OS WebView

In Speedometer 2.0 we see the new A13 based phones exhibit a 19-20% performance increase compared to the previous generation iPhone XS and the A12. The increase is in-line with Apple’s performance claims. The increase this year is a bit smaller than what we saw last year with the A12, as it seems the main boost to the scores last year was the upgrade to a 128KB L1I cache.

JetStream 2 - OS Webview

JetStream 2 is a newer browser benchmark that was released earlier this year. The test is longer and possibly more complex than Speedometer 2.0 – although we still have to do proper profiling of the workload. The A13’s increases here are about 13%. Apple’s chipsets, CPUs, and custom Javascript engine continue to dominate the mobile benchmarks, posting double the performance we see from the next-best competition.

WebXPRT 3 - OS WebView

Finally, WebXPRT represents more of a “scaling” workload that isn’t as steady-state as the previous benchmarks. Still, even here the new iPhones showcase an 18-19% performance increase.

Last year Apple made big changes to the kernel scheduler in iOS12, and vastly shortened the ramp-up time of the CPU DVFS algorithm, decreasing the time the system takes to transition from lower idle frequencies and small cores idle to full performance of the large cores. This resulted in significantly improved device responsiveness across a wide range of past iPhone generations.

Compared to the A12, the A13 doesn’t change all that much in terms of the time it takes to reach the maximum clock-speed of the large Lightning cores, with the CPU core reaching its peak in a little over 100ms.

What does change a lot is the time the workload resides on the smaller Thunder efficiency cores. On the A13 the small cores are ramping up significantly faster than on the A12. There’s also a major change in the scheduler behavior and when the workload migrates from the small cores to the large cores. On the A13 this now happens after around 30ms, while on the A12 this would take up to 54ms. Due to the small cores no longer being able to request higher memory controller performance states on their own, it likely makes sense to migrate to the large cores sooner now in the case of a more demanding workload.
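The migration timings above can be captured in a toy model. This is purely illustrative and not Apple’s actual scheduler logic; only the ~30ms and ~54ms migration points come from our measurements.

```python
# Simplified model: a sustained workload starts on the efficiency cores and
# migrates to the performance cores once it has run past the migration point.
# The thresholds are the measured values quoted above; everything else is a
# deliberate simplification.

def core_at(t_ms: float, migrate_ms: float) -> str:
    """Which cluster a sustained workload runs on at time t (toy model)."""
    return "efficiency" if t_ms < migrate_ms else "performance"

A12_MIGRATE_MS = 54  # measured: up to ~54 ms on the A12
A13_MIGRATE_MS = 30  # measured: ~30 ms on the A13

# 40 ms into a demanding workload, the A13 has already moved to the big
# cores while the A12 is still on the small ones:
print(core_at(40, A12_MIGRATE_MS))  # efficiency
print(core_at(40, A13_MIGRATE_MS))  # performance
```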

The A13’s Lightning cores start off at a base frequency of around 910MHz, which is a bit lower than the A12 and its base frequency of 1180MHz. What this means is that Apple has extended the dynamic range of the large cores in the A13 both towards higher performance as well as towards the lower, more efficient frequencies.
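The widened dynamic range can be quantified with a quick calculation. The base frequencies come from our measurements above; the peak frequencies (~2.5GHz for the A12’s Vortex cores, ~2.66GHz for the A13’s Lightning cores) are approximate figures assumed here for illustration.

```python
# Ratio of peak to base frequency for the big cores. Base clocks come from
# the text; the peak clocks are approximate assumptions for illustration.

def dynamic_range(base_mhz: float, peak_mhz: float) -> float:
    return peak_mhz / base_mhz

a12 = dynamic_range(base_mhz=1180, peak_mhz=2500)  # ~2.1x
a13 = dynamic_range(base_mhz=910, peak_mhz=2660)   # ~2.9x
print(f"A12: {a12:.2f}x, A13: {a13:.2f}x")
```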

Machine Learning Inference Performance

Apple has also claimed to have increased the performance of their neural processor IP block in the A13. To use this unit, you have to make use of the CoreML framework. Unfortunately we don’t have a custom tool for testing this as of yet, so we have to fall back to one of the rare external applications out there which does provide a benchmark for this, and that’s Master Lu’s AIMark.

Like the web-browser workloads, iOS13 has brought performance improvements for past devices, so we’ve rerun the iPhone X and XS scores for proper comparisons to the new iPhone 11.

鲁大师 / Master Lu - AIMark 3 - InceptionV3

鲁大师 / Master Lu - AIMark 3 - ResNet34

鲁大师 / Master Lu - AIMark 3 - MobileNet-SSD

鲁大师 / Master Lu - AIMark 3 - DeepLabV3

The improvements for the iPhone 11 and the new A13 vary depending on the model and workload. For the classical models such as InceptionV3 and ResNet34, we’re seeing 23-29% improvements in the inference rate. MobileNet-SSD sees a more limited 17% increase, while DeepLabV3 sees a major increase of 48%.
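For reference, the percentage figures are simple ratios of the measured inference rates. A minimal sketch, with hypothetical rates chosen only to reproduce a DeepLabV3-style gain (the real AIMark scores live in the charts):

```python
def improvement_pct(old_rate: float, new_rate: float) -> float:
    """Percentage increase in inference rate (e.g. inferences/second)."""
    return (new_rate / old_rate - 1) * 100

# Hypothetical example: 20 inf/s on the A12 vs 29.6 inf/s on the A13
# corresponds to the ~48% DeepLabV3-class improvement quoted above.
print(round(improvement_pct(20, 29.6)))  # 48
```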

Generally, the issue with running machine learning benchmarks is that they run through an abstraction layer, in this case CoreML. We have no guarantees on how much of the model is actually being run on the NPU versus the CPU and GPU, as this can differ a lot depending on the ML drivers of the device.
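A toy dispatcher illustrates the problem. This is not CoreML’s real placement logic; the op names and support tables here are hypothetical, but the fallback pattern (NPU if supported, else GPU, else CPU) mirrors why scores can shift between devices and driver versions.

```python
# Hypothetical per-device op-support tables; a real driver's tables are
# opaque and can change between devices and OS versions.
NPU_OPS = {"conv", "relu", "pool"}
GPU_OPS = NPU_OPS | {"softmax"}

def place(op: str) -> str:
    """Silently fall back from NPU to GPU to CPU, as an ML driver might."""
    if op in NPU_OPS:
        return "NPU"
    if op in GPU_OPS:
        return "GPU"
    return "CPU"

# A model with one unsupported custom layer ends up split across units:
model = ["conv", "relu", "pool", "custom_postproc", "softmax"]
print([place(op) for op in model])
# ['NPU', 'NPU', 'NPU', 'CPU', 'GPU']
```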

Nevertheless, the A13 and iPhone 11 here are very competitive and provide good iterative performance boosts for this generation.

Performance Conclusion

Overall, performance on the iPhone 11s is excellent, as we've come to expect time and time again from Apple. With that said, however, I can’t really say that I notice much of a difference from the iPhone XS in daily usage. So while the A13 delivers class-leading performance, it's probably not going to be very compelling for users coming from last year's A12 devices; the bigger impact will be felt by those coming from older devices. Otherwise, with this much horsepower, I feel the user experience would benefit significantly more from an option to accelerate application and system animations, or even to turn them off completely, in order to really feel the proper snappiness of the hardware.


242 Comments


  • Andrei Frumusanu - Monday, October 21, 2019 - link

    The ROG2 is in the charts. It's getting good scores because it's the only S855+ phone in the charts, because the Adreno 640 has extremely high ALU performance, and because the phone itself is allowed to reach much higher temperatures than the iPhones.

    The benchmark *tests* are the exact same other than being run on different APIs. What's being rendered is identical between iOS and Android.
  • techsorz - Monday, October 21, 2019 - link

    Very nice, I think you should write that in your review. Although taking an iPhone review and then starting off by exploiting its apparent weakness in the first graph, which is the only thing most people will read, isn't very objective in my opinion. I also generally think it's better craftsmanship to run benchmarks that have received updates on both devices, regardless.

    I mean, opening 3Dmark on the new iPhone literally starts it up in an iPhone 8 compatibility mode. You can tell by how the UI doesn't even border the entire display. I just don't see a single compelling argument as to why you would ever pick this tool.
  • techsorz - Monday, October 21, 2019 - link

    Hi Andrei, I see that you updated the review. I apologise for my harsh tone, thank you for this discussion, I learned a lot of new info.
  • Andrei Frumusanu - Monday, October 21, 2019 - link

    I updated absolutely nothing ...............
  • techsorz - Monday, October 21, 2019 - link

    Oh so you are here, why are you not addressing my point ?
  • Andrei Frumusanu - Monday, October 21, 2019 - link

    What point? The UI is irrelevant, the test is offscreen.
  • techsorz - Monday, October 21, 2019 - link

    Okay, i'll just have to quote yourself then:

    " I’ve actually gone back and quickly retested the iPhone XS on iOS13 and did see a 20% increase in performance compared to what we see in the graphs here; " - Andrei Frumusanu

    And here is the knockout:

    " the workload is running on Metal and the iOS version is irrelevant in that regard." - Andrei Frumusanu

    Jesus christ, pull yourself together and fix your god damn review.

    People reading, you can make your own conclusion here.
  • Andrei Frumusanu - Monday, October 21, 2019 - link

    There is nothing to fix and there is nothing wrong with the benchmark, you went from the test being old and broken, to talking about it throttling differently because it's older, to the UI being an issue when it's completely irrelevant. The scores are what they are because that's the performance of the chip.

    The physics test sucks on Apple because it's one weakness in their microarchitecture: https://benchmarks.ul.com/news/understanding-3dmar...
  • techsorz - Monday, October 21, 2019 - link

    Are you literally quoting an article from 2013 to prove something? I didn't go from anywhere, it IS old and broken. The score does NOT represent the throttling you would expect on updated software and certainly can NOT be graphed and compared with the Android version. It is BS that the app renders the same thing, you have literally 0 way of knowing since you didn't write the code.

    And I didn't go "herp derp, the UI is small" - I said that the app is so ancient that it literally boots in compatibility mode for the iPhone 8. And it is a real thing, go ahead and check the developer forums.

    "The scores are what they are because that's the performance of the chip." ...

    " I’ve actually gone back and quickly retested the iPhone XS on iOS13 and did see a 20% increase in performance compared to what we see in the graphs here; "

    Come on dude, stop it.
  • Andrei Frumusanu - Monday, October 21, 2019 - link

    The THROTTLING has nothing to do with the software version or any GPU driver updates that Apple makes to improve performance. The improved drivers on the A12 in iOS13 do NOT change the throttling % between peak and sustained performance, which is a PHYSICAL characteristic of the phone.

    The workloads renders the SAME SCENE both on Android and iOS. We work closely with Futuremark, the developer of the benchmark, along with the developers of GFXBench. If you cannot accept this you have no place reading AT as I can not do anything more to convince you of basic facts regarding the testing.

    The compatibility mode you blab about is related to the UI resolution. It DOES NOT matter in any way for the test as it's rendered off-screen in our suite. The performance results DO NOT CHANGE.

    I am completely done with this topic.
