SPEC2006 Perf: Desktop Levels, New Mobile Power Heights

Given that we didn’t see too many major changes in the microarchitecture of the large Lightning CPU cores, we wouldn’t expect a particularly large performance increase over the A12. However, the 6% clock increase, alongside a few percent improvement in IPC thanks to improvements in the memory subsystems and core front-end, could, should, and does end up delivering around a 20% performance boost, which is consistent with what Apple is advertising.

I’m still falling back to SPEC2006 for the time being, as I haven’t had time to port and test SPEC2017 on mobile devices yet; it’s something that’s in the pipeline for the near future.

In SPECint2006, the performance improvements are relatively evenly distributed; on average we’re seeing a 17% increase. The biggest gains came in 471.omnetpp, which is latency bound, and 403.gcc, which puts more pressure on the caches; these tests saw respective increases of 25% and 24%, which is quite significant.

The 456.hmmer score increase is the lowest at 9%. That workload is heavily execution backend-bound, and given that the Lightning cores didn’t see many changes in that regard, we’re mostly seeing a minor IPC increase here along with the 6% increase in clock.
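As a quick sanity check of that reasoning, the clock and IPC contributions multiply, so we can back out the implied IPC uplift from any measured speedup. The sketch below simply assumes performance scales as frequency × IPC and uses the figures quoted above; it ignores memory-clock and other second-order effects, so the results are rough estimates only.

```python
# Back-of-the-envelope decomposition: speedup = (clock ratio) x (IPC ratio).
# Uses the figures quoted above; memory-clock and other second-order effects
# are ignored, so treat the implied IPC numbers as rough estimates only.
CLOCK_RATIO = 1.06  # the A13's ~6% frequency bump over the A12

def implied_ipc_gain(speedup, clock_ratio=CLOCK_RATIO):
    return speedup / clock_ratio - 1.0

print(f"456.hmmer   (+9%):  ~{implied_ipc_gain(1.09) * 100:.1f}% IPC")  # ~2.8%
print(f"SPECint avg (+17%): ~{implied_ipc_gain(1.17) * 100:.1f}% IPC")  # ~10.4%
```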

While the performance figures are quite straightforward and don’t reveal anything surprising, the power and efficiency figures are extremely unexpected. In virtually all of the SPECint2006 tests, Apple has increased the peak power draw of the A13 SoC, and in many cases we’re almost 1W above the A12. At peak performance, the power increase was greater than the performance increase, which is why in almost all workloads the A13 ends up less efficient than the A12.
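To illustrate why a faster chip can still be a step backwards in efficiency: perf-per-watt (or, equivalently, energy per completed run) regresses whenever power grows faster than performance. The numbers in the sketch below are purely hypothetical and only mimic the trend described above; they are not our measured values.

```python
# Hypothetical illustration of an efficiency regression: the newer part is
# faster in absolute terms, but its power grew faster than its performance.
# All numbers below are made up for illustration, not measured values.
def perf_per_watt(score, avg_power_w):
    return score / avg_power_w

def energy_per_run(avg_power_w, runtime_s):
    # Fixed-work benchmark: energy = average power x runtime.
    return avg_power_w * runtime_s

# "A12-like" baseline vs. an "A13-like" part with +17% perf but ~+1 W power.
a12 = {"score": 100, "power_w": 4.0, "runtime_s": 600}
a13 = {"score": 117, "power_w": 5.0, "runtime_s": 600 / 1.17}

for name, d in (("A12-like", a12), ("A13-like", a13)):
    ppw = perf_per_watt(d["score"], d["power_w"])
    joules = energy_per_run(d["power_w"], d["runtime_s"])
    print(f"{name}: {ppw:.1f} pts/W, {joules:.0f} J per run")
# Trend: more points per second, yet fewer points per watt and more joules
# burned to finish the same fixed workload.
```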

In the SPECfp2006 workloads, we’re seeing a similar story. The A13’s performance increases are respectable, averaging 19% for the suite, with individual increases between 14% and 25%.

The total power use is quite alarming here, as we’re exceeding 5W for many workloads. In 470.lbm the chip went even higher, averaging 6.27W. Had I not been actively cooling the phone to purposefully keep it from throttling, it would have been impossible for the chip to maintain this performance for prolonged periods.

Here we saw a few workloads that were kinder in terms of efficiency: while power consumption is still notably increased, it scales more linearly with performance. In others, however, we’re still seeing an efficiency regression.

Above is a more detailed historical overview of performance across the SPEC workloads for the SoCs we’ve tested in the past. We’ve now included the latest high-end desktop CPUs as well, to give context as to where mobile SoCs stand in terms of absolute performance.

Overall, in terms of performance, the A13 and the Lightning cores are extremely fast. In the mobile space, there’s really no competition as the A13 posts almost double the performance of the next best non-Apple SoC. The difference is a little bit less in the floating-point suite, but again we’re not expecting any proper competition for at least another 2-3 years, and Apple isn’t standing still either.

Last year I noted that the A12 was only margins off the best desktop CPU cores. This year, the A13 has essentially matched the best that AMD and Intel have to offer – in SPECint2006 at least. In SPECfp2006 the A13 is still roughly 15% behind.

In terms of power and efficiency, the A13 seemingly wasn’t a very successful iteration for Apple, at least when it comes to the efficiency at the chip’s peak performance state. The higher power draw should mean that the SoC and phone will be more prone to throttling and sensitive to temperatures.


[Chart: estimated frequency/power curve – this is the A12, not the A13]

One possible explanation for the quite shocking power figures is that for the A13, Apple is riding the far end of the frequency/voltage curve at the peak frequencies of the new Lightning cores. In the above graph we have an estimated power curve for last year’s A12 – here we can see that Apple is very conservative with voltage up until the last few hundred MHz. It’s possible that for the A13, Apple was even more aggressive in the later frequency states.
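The intuition behind that hypothesis: dynamic power scales roughly with C·V²·f, and voltage has to climb steeply for the last few hundred MHz, so a small frequency gain at the top of the curve costs a disproportionate amount of power. The (frequency, voltage) points in the sketch below are invented purely to illustrate the shape of such a curve; they are not Apple’s actual DVFS states.

```python
# Rough dynamic-power model: P ~ C * V^2 * f. The (frequency, voltage) pairs
# below are invented to mimic the shape of a steep top-end V/f curve; they
# are not Apple's real DVFS table.
points = [
    (2.20, 0.80),  # comfortable mid-curve operating point (GHz, V)
    (2.50, 0.90),  # "A12-like" peak (hypothetical voltage)
    (2.66, 1.00),  # "A13-like" peak, riding the far end of the curve
]

base_f, base_v = points[0]
for f, v in points:
    rel_perf = f / base_f
    rel_power = (f / base_f) * (v / base_v) ** 2
    print(f"{f:.2f} GHz @ {v:.2f} V -> {rel_perf:.2f}x perf at {rel_power:.2f}x power")
# Going from the 2.50 GHz point to the 2.66 GHz point buys ~6% performance
# but costs ~30% more power in this toy model.
```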

The good news about such a hypothesis is that the A13, on average and in daily workloads, should be operating at significantly more efficient operating points. Apple’s marketing materials describe the A13 as being 20% faster, while also stating that it uses 30% less power than the A12, which unfortunately is phrased in a misleading (or at least unclear) manner. While we suspect that a lot of people will interpret this to mean that the A13 is 20% faster while simultaneously using 30% less power, it’s actually either one or the other. In effect, what this means is that at the performance point equivalent to the peak performance of the A12, the A13 would use 30% less power. Given the steepness of Apple’s power curves, I can easily imagine this being accurate.
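To make the two readings concrete, here is a small illustration with made-up numbers: the 20% and the 30% describe two different operating points on the curve, not one combined improvement.

```python
# The two separate claims, illustrated with made-up numbers (not measurements):
#   1) A13 at its own peak vs. A12 at its peak   -> ~20% more performance.
#   2) A13 dialled back to A12-peak performance  -> ~30% less power.
a12_peak     = {"perf": 100, "power_w": 4.0}        # hypothetical baseline
a13_peak     = {"perf": 120, "power_w": 5.0}        # faster, but draws more power
a13_iso_perf = {"perf": 100, "power_w": 4.0 * 0.7}  # same perf as A12 peak, -30% power

# What the claim does NOT mean: 120 perf at 2.8 W at the same time.
print(a13_peak["perf"] / a12_peak["perf"])           # 1.2 -> the "20% faster" reading
print(a13_iso_perf["power_w"] / a12_peak["power_w"]) # 0.7 -> the "30% less power" reading
```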

Nevertheless, I do question why Apple decided to be so aggressive in terms of power this generation. The N7P process node used in this generation didn’t bring any major improvements, so it’s possible they were in a tough spot, having to decide between increasing power or making do with more meager performance increases. Whatever the reason, in the end it doesn’t cause any practical issues for the iPhone 11 series, as the chip’s thermal management is top notch.

Comments

  • Andrei Frumusanu - Monday, October 21, 2019 - link

    The ROG2 is in the charts. It's getting good scores because it's the only S855+ phone in the charts, because the Adreno 640 has extremely high ALU performance, and because the phone itself is allowed to reach much higher temperatures than the iPhones.

    The benchmark *tests* are the exact same other than being run on different APIs. What's being rendered is identical between iOS and Android.
  • techsorz - Monday, October 21, 2019 - link

    Very nice, I think you should write that in your review. Although taking an iPhone review and then starting off with exploiting its apparent weakness on the first graph, which is the only thing most people will read, isn't very objective in my opinion. I also generally think it's better craftsmanship to run benchmarks that have received updates on both devices, regardless.

    I mean, opening 3DMark on the new iPhone literally starts it up in an iPhone 8 compatibility mode. You can tell by how the UI doesn't even border the entire display. I just don't see a single compelling argument as to why you would ever pick this tool.
  • techsorz - Monday, October 21, 2019 - link

    Hi Andrei, I see that you updated the review. I apologise for my harsh tone, thank you for this discussion, I learned a lot of new info.
  • Andrei Frumusanu - Monday, October 21, 2019 - link

    I updated absolutely nothing ...............
  • techsorz - Monday, October 21, 2019 - link

    Oh, so you are here, why are you not addressing my point?
  • Andrei Frumusanu - Monday, October 21, 2019 - link

    What point? The UI is irrelevant, the test is offscreen.
  • techsorz - Monday, October 21, 2019 - link

    Okay, I'll just have to quote you, then:

    " I’ve actually gone back and quickly retested the iPhone XS on iOS13 and did see a 20% increase in performance compared to what we see in the graphs here; " - Andrei Frumusanu

    And here is the knockout:

    " the workload is running on Metal and the iOS version is irrelevant in that regard." - Andrei Frumusanu

    Jesus christ, pull yourself together and fix your god damn review.

    People reading, you can make your own conclusion here.
  • Andrei Frumusanu - Monday, October 21, 2019 - link

    There is nothing to fix and there is nothing wrong with the benchmark, you went from the test being old and broken, to talking about it throttling differently because it's older, to the UI being an issue when it's completely irrelevant. The scores are what they are because that's the performance of the chip.

    The physics test sucks on Apple because it hits a weakness in their microarchitecture: https://benchmarks.ul.com/news/understanding-3dmar...
  • techsorz - Monday, October 21, 2019 - link

    Are you literally quoting an article from 2013 to prove something? I didn't go from anywhere, it IS old and broken. The score does NOT represent the throttling you would expect on updated software and certainly can NOT be graphed and compared with the Android version. It is BS that the app renders the same thing, you have literally 0 way of knowing since you didn't write the code.

    And I didn't go "herp derp, the UI is small" - I said that the app is so ancient that it literally boots in compatibility mode for the iPhone 8. And it is a real thing, go ahead and check the developer forums.

    "The scores are what they are because that's the performance of the chip." ...

    " I’ve actually gone back and quickly retested the iPhone XS on iOS13 and did see a 20% increase in performance compared to what we see in the graphs here; "

    Come on dude, stop it.
  • Andrei Frumusanu - Monday, October 21, 2019 - link

    The THROTTLING has nothing to do with the software version or any GPU driver updates that Apple makes to improve performance. The improved drivers on the A12 in iOS13 do NOT change the throttling % between peak performance and sustained, which is a PHYSICAL characteristic of the phone.

    The workload renders the SAME SCENE on both Android and iOS. We work closely with Futuremark, the developer of the benchmark, along with the developers of GFXBench. If you cannot accept this you have no place reading AT, as I cannot do anything more to convince you of basic facts regarding the testing.

    The compatibility mode you blab about is related to the UI resolution. It DOES NOT matter in any way for the test as it's rendered off-screen in our suite. The performance results DO NOT CHANGE.

    I am completely done with this topic.
