System Performance

While synthetic test performance is one thing – and hopefully we’ve covered that well with SPEC – interactive performance in real use-cases behaves differently, and here software can play a major role in perceived performance.

I will openly admit that our iOS system performance suite looks extremely meager: we are really left with only our web browser tests, as iOS is quite lacking in meaningful alternatives such as PCMark on the Android side.

Speedometer 2.0 - OS WebView

Speedometer 2.0 is the most up-to-date industry standard JavaScript benchmark, testing the performance of the most common modern JS frameworks.

The A12 sports a massive 31% jump over the A11, again pointing out that Apple’s advertised performance figures quite undersell the new chipset.

We’re also seeing a small boost from iOS 12 on the previous generation devices. Here the boost comes not only from a change in how iOS’s scheduler handles load, but also from further improvements in the ever-evolving JS engine that Apple uses.

WebXPRT 3 - OS WebView

WebXPRT 3 is also a browser test; however, its workloads are more widespread and varied, and also contain a lot of processing tests. Here the iPhone XS showcases a smaller 11% advantage over the iPhone X.

Former devices here also see a healthy boost in performance, with the iPhone X ticking up from 134 to 147 points, or 10%. The iPhone 7’s A10 sees a larger boost of 33%, something we’ll get into in more detail in a little bit.

iOS12 Scheduler Load Ramp Analyzed

Apple promised a significant performance improvement in iOS12, thanks to the way their new scheduler accounts for the loads from individual tasks. The operating system’s kernel scheduler tracks the execution time of threads and aggregates this into a utilisation metric, which is then used by, for example, the DVFS mechanism. The algorithm which decides how this load is accounted over time is generally a simple software decision – it can be tweaked and engineered to whatever a vendor sees fit.
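The exact algorithm Apple uses here isn’t public, but as a rough illustration of the concept, below is a minimal sketch in C of a decaying-window load tracker of the kind a scheduler might feed into a DVFS governor. The window length, decay factor, and 0..1024 utilisation scale are assumptions chosen purely for illustration, not Apple’s actual parameters; the point is simply that the weighting of new versus old windows determines how quickly the tracked load, and thus the frequency, ramps up.

```c
/* Illustrative sketch only: a decaying-window load tracker in the spirit of
 * what a kernel scheduler might feed into DVFS. Apple's actual algorithm and
 * its parameters are not public; all constants here are made up. */
#include <stdint.h>
#include <stdio.h>

#define WINDOW_US  10000u   /* hypothetical accounting window: 10 ms            */
#define DECAY_NUM  3u       /* hypothetical decay: old util keeps 3/4 weight    */
#define DECAY_DEN  4u

struct load_tracker {
    uint64_t util;      /* decayed utilisation on a 0..1024 scale               */
    uint64_t busy_us;   /* busy time accumulated in the current window          */
};

/* Account a thread's runtime against the CPU it ran on. */
static void account_runtime(struct load_tracker *lt, uint64_t runtime_us)
{
    lt->busy_us += runtime_us;
}

/* Once per window, fold the window's busy time into the decayed utilisation.
 * Giving more weight to the newest window makes the governor ramp faster --
 * plausibly the kind of tuning knob that changed in iOS 12. */
static void window_rollover(struct load_tracker *lt)
{
    uint64_t new_util = (lt->busy_us * 1024u) / WINDOW_US;   /* 0..1024 */
    lt->util = (lt->util * DECAY_NUM
             + new_util * (DECAY_DEN - DECAY_NUM)) / DECAY_DEN;
    lt->busy_us = 0;
}

int main(void)
{
    struct load_tracker lt = {0, 0};
    /* Simulate a core going from idle to fully busy and watch how many
     * windows it takes for the tracked utilisation to saturate. */
    for (int window = 0; window < 12; window++) {
        account_runtime(&lt, WINDOW_US);      /* 100% busy this window */
        window_rollover(&lt);
        printf("window %2d (%3d ms): util = %4llu / 1024\n",
               window, (window + 1) * 10, (unsigned long long)lt.util);
    }
    return 0;
}
```

With these made-up parameters the tracked utilisation climbs past 80% of maximum only after six 10ms windows; shrinking the window or lowering the weight given to old windows would make the frequency ramp correspondingly faster.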

Because iOS’s kernel is closed source, we can’t really see what the changes are; however, we can measure their effects. A relatively simple way to do this is to track frequency over time in a workload going from idle to full performance. I did this on a set of iPhones ranging from the 6 to the X (and XS), before and after the iOS12 system update.
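As a rough sketch of how such a ramp can be observed from user space, one can kick off a fixed busy loop from idle and sample its throughput in short time slices; the relative throughput then tracks the core’s clock frequency over time. The C snippet below illustrates the idea – it is not necessarily the exact methodology used to generate the graphs that follow, and the slice length and per-iteration workload are arbitrary choices.

```c
/* Illustrative sketch: observe a DVFS ramp from idle by sampling the
 * throughput of a fixed busy loop in short slices. Relative iterations per
 * slice track the core's current clock frequency. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static uint64_t now_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000u + (uint64_t)ts.tv_nsec / 1000u;
}

int main(void)
{
    const uint64_t slice_us = 2000;      /* 2 ms sampling slices            */
    const int      slices   = 250;       /* ~500 ms total observation       */
    volatile uint64_t sink  = 0;         /* keeps the work from being elided */

    for (int s = 0; s < slices; s++) {
        uint64_t start = now_us(), iters = 0;
        while (now_us() - start < slice_us) {
            /* fixed amount of integer work per iteration */
            for (int i = 0; i < 1000; i++)
                sink += (uint64_t)i * 2654435761u;
            iters++;
        }
        /* iterations per slice scale with the core's current frequency */
        printf("%llu us, %llu iters\n",
               (unsigned long long)(s * slice_us),
               (unsigned long long)iters);
    }
    return (int)(sink & 1);
}
```

Plotting iterations-per-slice against elapsed time gives a curve with the same shape as the frequency ramp, which is enough to compare ramp times before and after an OS update on the same device.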

Starting off with the iPhone 6 with the A8 chipset, I had some odd results on iOS11, as the scaling behaviour from idle to full performance was quite unusual. I repeated this a few times, yet it still came up with the same results. The A8’s CPU cores idled at 400MHz and remained there for 110ms until they jumped to 600MHz, and then, 10ms later, went on to the cores’ full 1400MHz.

iOS12 showcased a more step-wise behaviour, scaling up earlier and also reaching full performance after 90ms.

The iPhone 6S had a significantly different scaling behaviour on iOS11, and the A9 chip’s DVFS was insanely slow. Here it took a total of 435ms for the CPU to reach its maximum frequency. With the iOS12 update, this time has been massively slashed down to 80ms, giving a great boost to performance in shorter interactive workloads.

I was quite astonished to see just how slow the scheduler was before – this is currently the very same issue that is handicapping Samsung’s Exynos chipsets, and possibly other Android SoCs whose vendors don’t optimise their schedulers. While the hardware performance might be there, it just doesn’t manifest itself in short interactive workloads because the scheduler’s load-tracking algorithm is simply too slow.

The A10 had similarly bad characteristics to the A9, with the time to full performance well exceeding 400ms. In iOS12, the iPhone 7 slashes this roughly in half, to around 210ms. It’s odd to see the A10 being more conservative in this regard compared to the A9 – but this might have something to do with the little cores.

In this graph, it’s also notable to see the frequency of the small Zephyr cores – they start at 400MHz and peak at 1100MHz. The frequency in the graph then drops back to 758MHz because at this point the workload switches over to the big cores, which continue their frequency ramp up to maximum performance.

On the Apple A11, I didn’t see any major changes, and indeed any differences could just be random noise between measurements on the different firmwares. Both in iOS11 and iOS12, the A11 scales to full frequency in about 105ms. Please note the x-axis in this graph is a lot shorter than in the previous graphs.

Finally, on the iPhone XS’s A12 chipset, we can’t measure any pre- and post-update difference, as the phone comes with iOS12 out of the box. Here again we see that it reaches full performance after 108ms, and we see the transition of the thread from the Tempest cores over to the Vortex cores.

Overall, I hope this provides a clear visual representation of the performance differences that iOS12 brings to older devices.

As for the iPhone XS – I haven’t had any issues at all with the phone’s performance, and it was fast. I have to admit I’m still a daily Android user, and I use my phones with animations completely turned off, as I find they get in the way of a device’s speed. There’s no way to completely turn animations off in iOS, and while this is just my subjective personal opinion, I found they quite hamper the perceived performance of the phone. In non-interactive workloads, the iPhone XS just blazed through everything without any issue or concern.

Comments

  • peevee - Monday, October 15, 2018 - link

    "we see four new smaller efficiency cores named “Mistral”. The new small cores bring some performance improvements, but it’s mostly in terms on power and power efficiency where we see Tempest make some bigger leaps"

    So, is it Tempest or Mistral? Or both?
  • Ryan Smith - Tuesday, October 23, 2018 - link

    It's Tempest. Thanks for the heads up!
  • peevee - Monday, October 15, 2018 - link

    "upgrade in sensor size from an area of 32.8mm² to 40.6mm²"

    These are not sensor sizes, these are total image chip sizes.
    Sensor (as in "sensor", the part which actually "senses" light) sizes are not hard to calculate, and are MUCH smaller.

    12MP is approx 4000x3000 pixels.
    The old sensor had 1.22 µm pixel pitch. 1.22*4=4.88mm. 1.22*3=3.66mm.
    So old sensor was 4.88x3.66mm = 17.9mm².

    The new sensor is 5.6mm x 4.2mm = 23.5mm².

    This is in comparison to

    - typical cheap P&S camera sensor size (so-called '1/2.3" type') of 6mm x 4.5mm = 27mm²
    - high-end P&S camera sensor, (1" type) of 13.2mm x 8.8mm = 116mm²
    - Four Thirds camera sensor size of 17.2 x 13mm = 225mm²
    - Modern pro camera sensor size of about 36x24mm = 864mm².

    Please do not confuse your readers by calling total image chip sizes "sensor sizes".
  • peevee - Monday, October 15, 2018 - link

    "The performance measurement was run in a synthetic environment (read: bench fan cooling the phones) where we assured thermals wouldn’t be an issue for the 1-2 hours it takes to complete a full suite run."

    Which makes the whole thing useless. Of course a wider design (read: hotter and less efficient, due to the higher overhead of often-useless blocks) will run faster in this environment, unlike in a user's hands (literally, ~36C/97F plus a blanketing effect).
  • Andrei Frumusanu - Monday, October 22, 2018 - link

    It changes absolutely nothing. It will still reach that performance even in your hands. The duration of a workload is not orthogonal to its complexity.
  • viczy - Sunday, October 21, 2018 - link

    Fantastic and in-depth work! Thanks for the data and analysis. I would like to know a little more about your method for energy and power measurement. Thanks!
  • techbug - Friday, November 2, 2018 - link

    Thanks a lot Andrei.

    L2 cache latency is 8.8ns and the core clock speed is 2.5GHz, so each cycle is around 0.4ns; the L2 cache latency is then 8.8ns/0.4ns = 22 cycles. This is much longer than Skylake, which is around 12 cycles (taking the i7-6700 Skylake at 4.0 GHz from https://www.7-cpu.com/cpu/Skylake.html as an example, that equals 3ns of L2 cache latency).

    So L2 latency is 8.8ns versus 3ns in Skylake. Is this comparison correct?

    I cannot tell the precise L1 latency from the graph "Much improved memory latency". Can you give the number?
    According to Figure 3 in https://www.spec.org/cpu2006/publications/SIGARCH-..., the working set size of 80% of SPEC2K6 workloads is larger than 8MB, so the A12's L2 cache (8MB) won't hold the working set – compare that with a Skylake configuration's 32MB L3 cache.

    So overall the memory hierarchy of the A12 doesn't seem comparable to Skylake's. What else helps it deliver comparable SPEC2K6 performance?
  • demol3 - Wednesday, December 5, 2018 - link

    Will there be a comparison between the XS series and the XR, or an XR review, anytime soon?
  • tfouto - Thursday, December 27, 2018 - link

    Does the XS have a true 10-bit panel, or does it use Frame Rate Control?
    What about the iPhone X?
  • Latiosxy - Wednesday, January 23, 2019 - link

    Hello. I just wanted to criticize the way this site works. It’s hard to read while listening to music when your intrusive ads follow my screen and interrupt my audio consistently. Please fix this as this has been really annoying. Thanks.
