SPEC2006 Performance: Reaching Desktop Levels

It’s been a while since we last attempted SPEC on an iOS device – for various reasons we weren’t able to continue with that over the last few years. I know a lot of people were looking forward to us picking back up where we left off, and I’m happy to share that I’ve spent some time getting a full SPEC2006 harness back up and running.

SPEC2006 is an important industry-standard benchmark and differentiates itself from other workloads in that the datasets it works on are significantly larger and more complex. While GeekBench 4 has established itself as a popular benchmark in the industry – and I do praise the effort of making it a fully cross-platform benchmark – one does have to take into account that it’s still relatively light in terms of program sizes and the data sizes of its workloads. As such, SPEC2006 is a better representative benchmark that exhibits more details of a given microarchitecture, especially in regard to memory subsystem performance.

The following SPEC figures are declared as estimates, as they were not submitted and officially validated by SPEC. The benchmark libraries were compiled with the following settings:

  • Android: Toolchain: NDK r16 LLVM compiler, Flags: -Ofast, -mcpu=cortex-a53
  • iOS: Toolchain: Xcode 10, Flags: -Ofast
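
For illustration, here’s a rough Python sketch of what the resulting compiler invocations could look like. The clang binary names, the --target triple and the benchmark.c file name are assumptions on my part – the real builds go through SPEC’s own harness and the full NDK/Xcode cross-compilation setup:

```python
# Purely illustrative: assembling example compiler invocations with the flag
# sets listed above. The binary names, target triple and source file name are
# assumptions; a real SPEC build goes through SPEC's runspec/config machinery
# and the complete NDK/Xcode cross-compilation environment.
android_cmd = ["clang", "--target=aarch64-linux-android", "-Ofast",
               "-mcpu=cortex-a53", "-c", "benchmark.c", "-o", "benchmark.o"]
ios_cmd = ["xcrun", "--sdk", "iphoneos", "clang", "-arch", "arm64",
           "-Ofast", "-c", "benchmark.c", "-o", "benchmark.o"]

print(" ".join(android_cmd))
print(" ".join(ios_cmd))
```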

On iOS, 429.mcf was a problem case as the kernel memory allocator generally refuses to allocate the single large 1.8GB chunk that the program requires (even on the new 4GB iPhones). I’ve modified the benchmark to use only half the number of arcs, roughly reducing the memory footprint to ~1GB. The resulting reduction in runtime has been measured on several platforms and I’ve applied a similar scaling factor to the iOS score – which I estimate to be accurate to within ±5%. The remaining workloads were manually verified and validated for correct execution.
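
As a rough sketch of that scaling approach (with made-up runtimes rather than my actual measurements):

```python
# Sketch of the scaling approach described above. All runtimes here are
# made-up placeholders, not the article's measurements.
def mcf_scale_factor(full_runtime_s: float, halved_runtime_s: float) -> float:
    """Runtime ratio between the full-size and reduced-arc 429.mcf runs,
    measured on a platform that can allocate the full ~1.8GB working set."""
    return full_runtime_s / halved_runtime_s

def estimate_full_runtime(ios_halved_runtime_s: float, factor: float) -> float:
    """Apply the externally measured factor to the reduced-size iOS run."""
    return ios_halved_runtime_s * factor

# e.g. a reference platform needing 520s at full size and 350s with half the arcs:
factor = mcf_scale_factor(520.0, 350.0)                # ~1.49x
print(f"{estimate_full_runtime(300.0, factor):.0f}s")  # estimated full-size iOS runtime
```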

The performance measurements were run in a synthetic environment (read: a bench fan cooling the phones) where we ensured thermals wouldn’t be an issue for the 1-2 hours it takes to complete a full suite run.

In terms of data presentation, I’m following the format of earlier articles this year, such as the Snapdragon 845 and Exynos 9810 evaluation in our Galaxy S9 review.

When measuring performance and efficiency, it’s important to take three metrics into account. The first is, evidently, the performance and runtime of a benchmark, which in the graphs below is represented on the right axis, growing from the right. Here the bigger the figure, the more performant the SoC/CPU. The labels represent the SPECspeed scores.

On the left axis, the bars represent the energy usage for the given workload. The bars grow from the left, and a longer bar means more energy used by the platform – a platform is more energy efficient when its bars are shorter. The labels showcase the average power used in Watts, which is still an important secondary metric to take into account in thermally constrained devices, as well as the total energy used in Joules, which is the primary efficiency metric.
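
To make the relationship between the three metrics explicit, here’s a minimal sketch using made-up sample numbers:

```python
# Minimal sketch of how the three metrics relate, using made-up sample data:
# energy (J) is the integral of power over the run, average power (W) is
# energy divided by runtime, and the SPECspeed score scales with 1/runtime.
power_samples_w   = [3.1, 3.4, 3.8, 3.6, 3.3]  # hypothetical power trace, watts
sample_interval_s = 1.0                         # hypothetical sampling period

runtime_s   = sample_interval_s * len(power_samples_w)
energy_j    = sum(p * sample_interval_s for p in power_samples_w)
avg_power_w = energy_j / runtime_s

print(f"runtime {runtime_s:.0f}s, energy {energy_j:.1f}J, avg power {avg_power_w:.2f}W")
```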

The data is ordered as in the legend, colour-coded by SoC vendor and shaded by generation. I’ve kept the data to the Apple A12, A11, Exynos 9810 (at 2.7 and 2.3GHz), Exynos 8895, Snapdragon 845 and Snapdragon 835. This gives us an overview of all relevant CPU microarchitectures of the last two years.

Starting off with the SPECint2006 workloads:

The A12’s clock comes in at around 5% higher than the A11’s in most workloads; however, we have to keep in mind that we can’t really lock the frequencies on iOS devices, so this is just an assumption about the runtime clocks during the benchmarks. In SPECint2006, the A12 performed on average 24% better than the A11.
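
As a side note on how such a suite-level average can be aggregated – SPEC’s own composite scores use a geometric mean of per-test ratios – here’s a small sketch with placeholder ratios rather than the actual per-test results:

```python
from math import prod

# Placeholder per-test speedups (A12 score / A11 score) – not the article's
# data. A geometric mean of per-test ratios is how SPEC aggregates its own
# composite scores, and one reasonable way to express a suite-level "average".
ratios = [1.05, 1.27, 1.30, 1.42, 1.18]

geo_mean = prod(ratios) ** (1 / len(ratios))
print(f"suite-level improvement: {(geo_mean - 1) * 100:.0f}%")
```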

The smallest increases are seen in 456.hmmer and 464.h264ref – these are the two most execution-bound tests in the suite. As the A12 seemingly didn’t have any major changes in this regard, the small increase can mainly be attributed to the higher frequency as well as the improvements in the cache hierarchy.

The improvements in 445.gobmk are quite large at 27% – the characteristics of this workload are bottlenecks on store address events as well as branch mispredictions. I did measure a major change in the way the A12 handles stores that cross cache lines, and since I’m not seeing significant changes in branch predictor accuracy, that store-handling change is the likelier source of the gain.

403.gcc partly, and even more so 429.mcf, 471.omnetpp, 473.astar and 483.xalancbmk, are sensitive to the memory subsystem, and this is where the A12 posts astounding performance gains of 30 to 42%. It’s clear that the new cache hierarchy and memory subsystem have greatly paid off here, as Apple was able to pull off one of the biggest performance jumps in recent generations.

When looking at power efficiency, overall the A12 has improved by 12% – but we have to remember that we’re talking about 12% less energy at peak performance. The A12 showcasing 24% better performance means we’re comparing two very different points on the performance/power curves of the two SoCs.
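
A quick back-of-the-envelope check, using nothing but those two averages, shows how both figures can be true at once and roughly predicts the average power change discussed in the next paragraph:

```python
# Back-of-the-envelope check on the averages above; the 24% and 12% figures
# come from the article, the rest is derived and only approximate.
perf_ratio   = 1.24        # A12 vs A11 SPECint2006 performance
energy_ratio = 1 - 0.12    # A12 uses ~12% less energy for the same work

runtime_ratio = 1 / perf_ratio                # faster run -> shorter runtime
power_ratio   = energy_ratio / runtime_ratio  # energy = power x runtime
print(f"implied average power ratio: {power_ratio:.2f}x")        # ~1.09x
print(f"e.g. ~3.36W on the A11 -> ~{3.36 * power_ratio:.2f}W")   # close to the ~3.64W below
```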

In the benchmarks where the performance gains were the largest – the aforementioned memory-limited workloads – we saw power consumption rise quite significantly. So even though 7nm promised power gains, Apple has opted to spend more energy than the new process node saved, and average power across the totality of SPECint2006 went up from ~3.36W on the A11 to 3.64W on the A12.

Moving on to SPECfp2006, we are looking only at the C and C++ benchmarks, as we have no Fortran compiler in Xcode, and it is incredibly complicated to get one working for Android since it’s not part of the NDK, which only ships a deprecated version of GCC.

SPECfp2006 has a lot more tests that are very memory intensive – out of the 7 tests we run, only 444.namd, 447.dealII, and 453.povray don’t see major performance regressions if the memory subsystem isn’t up to par.

Of course this majorly favours the A12, as the average gain in SPECfp is 28%. 433.milc absolutely stands out here with a massive 75% gain in performance. The benchmark is characterised by being instruction store limited – again part of the Vortex µarch where I saw a great improvement. The same analysis applies to 450.soplex – a combination of the superior cache hierarchy and memory store performance improves performance by 42%.

470.lbm is an interesting workload for the Apple CPUs as they showcase multi-factor performance advantages over competing Arm and Samsung cores – oddly enough, Qualcomm’s older Snapdragon 820 Kryo CPU still outperforms the more recent Android SoCs here. 470.lbm is characterised by extremely large loops in its hottest piece of code. Microarchitectures can optimise such workloads by having (larger) instruction loop buffers, where on a loop iteration the core bypasses the decode stages and fetches the instructions from the buffer instead. It seems that Apple’s microarchitecture has some such mechanism. The other explanation is the vector execution performance of the Apple cores – lbm’s hot loop makes heavy use of SIMD, and Apple’s 3x execution throughput advantage is likely a heavy contributor to the performance here as well.

Similar to SPECint, the SPECfp workloads which saw the biggest performance jumps also saw increases in their power consumption. 433.milc saw a rise from 2.7W to 4.2W, alongside its 75% performance increase.

Overall, power consumption has seen a jump from 3.65W up to 4.27W. Energy efficiency has still improved in all tests but 482.sphinx3, where the power increase hit the maximum across all SPEC workloads for the A12 at 5.35W. The total energy used for SPECfp2006 on the A12 is 10% lower than on the A11.

I didn’t have time to go back and measure power for the A10 and A9, but generally they’re in line at around 3W for SPEC. I did run the performance benchmarks, and here’s an aggregate performance overview of the A9 through to the A12 along with the most recent Android SoCs, for those looking to compare past Apple generations.

Overall, the new A12 Vortex cores and the architectural improvements to the SoC’s memory subsystem give Apple’s new piece of silicon a much higher performance advantage than Apple’s marketing materials promote. The contrast with the best that Android SoCs have to offer is extremely stark – both in terms of performance as well as power efficiency. Apple’s SoCs have better energy efficiency than all recent Android SoCs while having a nearly 2x performance advantage. I wouldn’t be surprised if, were we to normalise for energy used, Apple ended up with a 3x performance lead.

This also gives us a great piece of context for Samsung’s M3 core, which was released this year: the argument that higher power consumption brings higher performance only makes sense when the total energy is kept in check. Here the Exynos 9810 uses twice the energy of last year’s A11 – at a 55% performance deficit.

Meanwhile Arm’s Cortex-A76 is scheduled to arrive inside the Kirin 980 as part of the Huawei Mate 20 in just a couple of weeks – and I’ll be making sure we give the new flagship a proper examination and a place among current SoCs in our performance and efficiency graphs.

What is quite astonishing is just how close Apple’s A11 and A12 are to current desktop CPUs. I haven’t had the opportunity to run things in a more comparable manner, but taking our server editor Johan De Gelas’ recent figures from earlier this summer, we see that the A12 outperforms a moderately-clocked Skylake CPU in single-threaded performance. Of course there are compiler considerations and various frequency concerns to take into account, but we’re now talking about very small margins until Apple’s mobile SoCs outperform the fastest desktop CPUs in terms of ST performance. It will be interesting to get more accurate figures on this topic in the coming months.


253 Comments

  • name99 - Saturday, October 6, 2018 - link

    If you're going to count like that, you need to throw in at least 7 Chinook cores (small 64-bit Apple-designed cores that act as controllers for various large blocks like the GPU or NPU).
    [A Chinook is a type of non-Vortex wind, just like a Zephyr, Tempest, or Mistral...]

    There's nothing that screams their existence on the die shots, but what little we know about them has been established by looking at the OS binaries for the new iPhones. Presumably if they really are minimal and require little to no L2 and smaller L1s (ie regular memory structures that are more easily visible), they could look like pretty much any of that vast sea of unexplained territory on the die.

    It's unclear what these do today apart from the obvious tasks like initialization and power tracking. (On the GPU it handles thread scheduling.)
    Even more interesting is what Apple's long term game here is? To move as much of the OS as possible off the main cores onto these auxiliary cores (and so the wheel of reincarnation spins back to System/360 Channel Controllers?) For example (in principle...) you could run the file system stack on the flash controller, or the network on a controller living near the radio basebands, and have the OS communicate with them at a much more abstract level.

    Does this make sense for power, performance, or security reasons? Not a clue! But in a world where transistors are cheap, I'm glad to see Apple slowly rethinking OS and system design decisions that were basically made in the early 1970s, and that we've stuck with ever since then regardless of tech changes.
  • ex2bot - Sunday, October 7, 2018 - link

    Much appreciate the review!
  • s.yu - Monday, October 8, 2018 - link

    Great job as always Andrei!
    I would only have hoped for a more thorough exploration of the limits of the portrait mode, to see if Apple really makes proper use of the depth map, taking a photo in portrait mode in a tunnel to see if the amount of blur is applied according to distance for example.
  • Mic_whos_right - Tuesday, October 9, 2018 - link

    Thanks for this comment – now I know why there was nothing last year. Great AnandTech standard of a review! Always above my level of understanding, with an info overload that teaches me a lot about the product.
  • Moh Qadee - Thursday, November 1, 2018 - link

    Thank you for this great detailed review. I have been coming back to it before making a purchase. Please make a comparison article on iPhone XS gaming vs other smartphones on the market. How much of a difference do thermals make over longer periods before it starts to throttle or heat up? Would you be able to give an approximate time before you noticed heat while gaming on the XS? I don't mind investing in an expensive phone as long as thermals don't limit the performance. There are gaming phones like the Razer 2 or the ROG out, and people compare them with an iPhone as it doesn't require as much cooling. I wonder if gaming for 20+ minutes is enough to heat the iPhone up to the point where you should be worried?
  • Ahadjisavvas - Monday, November 19, 2018 - link

    And the exynos m3 had 12 execution ports right? Can you elaborate on the major differences between the design of the vortex core in the a12 and the meerkat core in the m3? I would deeply appreciate it if you could.
  • Ahadjisavvas - Monday, November 19, 2018 - link

    And the exynos m3 has 12 execution ports right? Can you elaborate on the main differences between the design of the exynos m3 and the vortex core,that'll be really helpful and informative as well. Also,are you planning on writing a piece about the a12x soc, it'll be really interesting to hear how far apple has come with the soc on the 2018 ipad pro.
  • alysdexia - Monday, May 13, 2019 - link

    "Now what is very interesting here is that this essentially looks identical to Apple’s Swift microarchitecture from Apple's A6 SoC."
    This comparison doesn't make sense and it seems like you took the same execution ports to determine whether the chips are identical, when the ports could be arbitrary for each release. Rather I took the specifications (feature size, revision) and dates of each from these pages: https://en.wikipedia.org/wiki/Comparison_of_ARMv7-... and https://en.wikipedia.org/wiki/Comparison_of_ARMv8-... to come up with these matches: Cortex-A15-A9 A6, Cortex-A15-A9 A6X, Cortex-A57-A53 A7, Cortex-A57-A53 A8, Cortex-A57-A53 A8X, Cortex-A57-A53 A9, Cortex-A57-A53 A9X, Cortex-A73 A10, Cortex-A73 A10X, Cortex-A75 A11, Cortex-A76 A12, Cortex-A76 A12X. For exemplum 5 execution ports could be gotten (I'm no computer engineer so this is a SWAG.) from the 3 in Cortex-A9 subtracted from the 8 in Cortex-A15 but the later big.LITTLEs with 9 and 5 ports could be split from 7 or 8 as (7+2)+(7−2) or (8+1)+(8−3). You need to correct the Anandtech and Wikipedia pages.

    faster -> swifter, swiftlier
    ISO -> iso -> idem
    's !-> they; 1 != 2
    great:small::big:lite::mickel:littel
  • RSAUser - Friday, October 5, 2018 - link

    I still don't like iOS tendency towards warmer photos than it is irl.
  • DERSS - Saturday, October 6, 2018 - link

    It is weird because it is warmer in bright light and bleaker in dim light.
    Why can not they just even it out, make the photos less yellowish in bright light and less bleak in dim light?
