SPEC2006 Perf: Desktop Levels, New Mobile Power Heights

Given that we didn’t see many major changes in the microarchitecture of the large Lightning CPU cores, we wouldn’t expect a particularly large performance increase over the A12. However, the 6% clock increase, alongside an improvement in IPC thanks to changes in the memory subsystems and core front-end, could, should, and does end up delivering around a 20% performance boost, which is consistent with what Apple is advertising.
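To put rough numbers on how a clock bump and an IPC gain combine, here’s a minimal back-of-the-envelope sketch; the ~13% IPC figure is an assumption back-solved from the quoted 20% and 6% numbers, not a measured value.

```python
# Rough sanity check: clock and IPC gains compound multiplicatively.
# The IPC figure below is back-solved from the ~20% overall claim and the
# 6% clock bump; it is an assumption, not a measured value.
clock_gain = 1.06    # ~6% higher peak frequency
ipc_gain = 1.13      # assumed IPC uplift (memory subsystem + front-end)
perf_gain = clock_gain * ipc_gain
print(f"Estimated overall uplift: {(perf_gain - 1) * 100:.0f}%")  # -> ~20%
```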

I’m still falling back to SPEC2006 for the time being, as I haven’t had time to port and test SPEC2017 for mobile devices yet; it’s something that’s in the pipeline for the near future.

In SPECint2006, the improvements in performance are relatively evenly distributed. On average we’re seeing a 17% increase in performance. The biggest gains were in 471.omnetpp, which is latency bound, and 403.gcc, which puts more pressure on the caches; these tests saw respective increases of 25% and 24%, which is quite significant.

The 456.hmmer score increase is the lowest at 9%. That workload is highly execution backend-bound, and given that the Lightning cores didn’t see many changes in that regard, we’re mostly seeing minor IPC increases here along with the 6% increase in clock.
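As a sketch of how the clock component can be separated out, the implied IPC gain for 456.hmmer can be estimated by dividing the score gain by the frequency gain; this is a rough estimate, not a measured IPC figure.

```python
# Rough decomposition of the 456.hmmer gain into clock and implied IPC.
score_gain = 1.09    # ~9% higher SPEC score reported for 456.hmmer
clock_gain = 1.06    # ~6% higher peak frequency
implied_ipc_gain = score_gain / clock_gain
print(f"Implied IPC gain: {(implied_ipc_gain - 1) * 100:.1f}%")  # -> ~2.8%
```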

While the performance figures are quite straightforward and don’t reveal anything surprising, the power and efficiency figures on the other hand are extremely unexpected. In virtually all of the SPECint2006 tests, Apple has increased the peak power draw of the A13 SoC, and in many cases we’re almost 1W above the A12. At peak performance the power increase was greater than the performance increase, and that’s why in almost all workloads the A13 ends up less efficient than the A12.
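To illustrate why a larger power increase than performance increase produces an outright efficiency regression, here is a minimal sketch; the power ratio used below is a hypothetical illustration, not a measured figure.

```python
# Energy per completed workload = average power x runtime, and runtime only
# shrinks in proportion to the performance gain. Illustrative numbers only.
perf_ratio = 1.17      # ~17% faster than the A12 (SPECint2006 average)
power_ratio = 1.25     # hypothetical ~25% higher average power draw
energy_ratio = power_ratio / perf_ratio
print(f"Energy per workload vs. A12: {(energy_ratio - 1) * 100:.0f}% higher")
```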

In the SPECfp2006 workloads, we’re seeing a similar story. The performance increases of the A13 are respectable, averaging 19% for the suite, with individual increases between 14% and 25%.

The total power use is quite alarming here, as we’re exceeding 5W in many workloads. In 470.lbm the chip went even higher, averaging 6.27W. Had I not been actively cooling the phone to keep it from throttling, it would have been impossible for the chip to maintain this performance for prolonged periods.

Here we saw a few workloads that were kinder in terms of efficiency: while power consumption is still notably increased, it scales more linearly with performance. In others, however, we’re still seeing an efficiency regression.

Above is a more detailed historical overview of performance across the SPEC workloads and our past tested SoCs. We’ve now included the latest high-end desktop CPUs as well to give context as to where mobile chips stand in terms of absolute performance.

Overall, in terms of performance, the A13 and the Lightning cores are extremely fast. In the mobile space, there’s really no competition as the A13 posts almost double the performance of the next best non-Apple SoC. The difference is a little bit less in the floating-point suite, but again we’re not expecting any proper competition for at least another 2-3 years, and Apple isn’t standing still either.

Last year I noted that the A12 was within margins of the best desktop CPU cores. This year, the A13 has essentially matched the best that AMD and Intel have to offer, in SPECint2006 at least. In SPECfp2006 the A13 is still roughly 15% behind.

In terms of power and efficiency, the A13 seemingly wasn’t a very successful iteration for Apple, at least when it comes to the efficiency at the chip’s peak performance state. The higher power draw should mean that the SoC and phone will be more prone to throttling and sensitive to temperatures.


[Chart: estimated frequency/voltage power curve. Note that this is the A12, not the A13.]

One possible explanation for the quite shocking power figures is that for the A13, Apple is riding the far end of the frequency/voltage curve at the peak frequencies of the new Lightning cores. In the above graph we have an estimated power curve for last year’s A12; here we can see that Apple is very conservative with voltage up until the last few hundred MHz. It’s possible that for the A13 Apple was even more aggressive in the last few frequency states.

The good news about such a hypothesis is that the A13, on average and in daily workloads, should be operating at significantly more efficient operating points. Apple’s marketing materials describe the A13 as being 20% faster while also stating that it uses 30% less power than the A12, which unfortunately is phrased in a misleading (or at least unclear) manner. While we suspect a lot of people will interpret it to mean that the A13 is 20% faster while simultaneously using 30% less power, it’s actually either one or the other. In effect, this means that at a performance point equivalent to the peak performance of the A12, the A13 would use 30% less power. Given the steepness of Apple’s power curves, I can easily imagine this to be accurate.
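As a rough illustration of why backing off from the very top of the curve can save so much power, dynamic power scales roughly with frequency times voltage squared; the operating points below are hypothetical placeholders, not Apple’s actual frequencies or voltages.

```python
# Dynamic CPU power scales roughly as C * f * V^2; the last few hundred MHz
# typically require a disproportionate voltage bump. Hypothetical values only.
def relative_power(freq_ghz: float, volts: float) -> float:
    return freq_ghz * volts ** 2   # switching capacitance cancels out in ratios

a13_peak     = relative_power(2.66, 1.10)  # hypothetical A13 peak operating point
a12_equiv_pt = relative_power(2.50, 0.95)  # hypothetical point matching A12 peak perf

saving = 1 - a12_equiv_pt / a13_peak
print(f"Power saved by backing off: {saving * 100:.0f}%")  # -> ~30%
```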

Nevertheless, I do question why Apple decided to be so aggressive in terms of power this generation. The N7P process node used in this generation didn’t bring any major improvements, so it’s possible they were in a tough spot, having to choose between increasing power or making do with more meager performance increases. Whatever the reason, in the end it doesn’t cause any practical issues for the iPhone 11s, as the chip’s thermal management is top notch.

Comments

  • ksec - Wednesday, October 16, 2019 - link

    Well, at least one article on AnandTech got it right. Last time Anton referred to the Apple A13 as 7nm EUV, along with a TSMC update mentioning it as well. Now we can finally settle that Apple is using N7P.

    I think one of the reasons for the higher power usage is that the A13 was designed with 7nm EUV in mind, and was only later changed to N7P as 7nm EUV may not have provided enough capacity for Apple.

    I do wonder if we are going to see a 5nm Apple SoC next year.
  • Ian Cutress - Wednesday, October 16, 2019 - link

    Anton didn't realise 'N7P' and 'N7+ with EUV' were different processes - and that people were using P instead of + for some unknown reason. Up until a while ago, any time someone said 'second generation N7', it was always thought of as N7+. Now we're making them clarify.
  • FreckledTrout - Wednesday, October 16, 2019 - link

    It's TSMC's alphabet soup, so who can blame him. They could make the node names a bit easier for people who are not CPU process nerds.
  • dennphill - Wednesday, October 16, 2019 - link

    Apple! They just give you what THEY want to! Pretty disappointed in all their products. I finally gave in and got rid of my Google thingie (worthless, really) and got a reconditioned iPhone 8+ a couple of months ago. It is really (so here's my technical user review: "It's just...") OK. My wife, OTOH, has an SE - her second - and she really doesn't want anything bigger. EVER! (Listen to that, Apple!) Even the reported new 'tiny' 2020 5.4" iPhone is what she calls HUGE. Weekend project is replacing her iPhone SE battery that's begun not to keep a charge.
  • peevee - Wednesday, October 16, 2019 - link

    Make your bets: will Apple switch to an A13X in their MacBooks? Seems it would be very prudent with an 8-core implementation.
  • solipsism - Wednesday, October 16, 2019 - link

    I have no doubt Apple will replace Intel with an ARM SoC, but I do not expect it to be a beefed-up A-series chip found in an iPhone. Just like all their in-house designed chips, I would expect it to have its own letter designation, with considerably more memory bandwidth, access to PCIe, and other features that work well in lower-end portables and desktops.
  • Diogene7 - Wednesday, October 16, 2019 - link

    I hope Apple intends to keep their future laptop ARM CPU low power enough to not require any active cooling: a fanless ARM Mac laptop, if well executed, could have a lot of appeal to me!!!

    That's the reason why, at the moment, I am keeping an eye on the Samsung Galaxy Book S / Microsoft Surface Pro X, the 2 first fanless computers with the ARM Qualcomm 8cx: if responsiveness is good enough (especially for web browsing, watching video, Microsoft Office,...) then it would prove it is possible.

    From there, I would be very happy if Apple did an always-connected fanless MacBook Air with an ARM CPU, as otherwise I am considering buying a Qualcomm 8cx fanless laptop (if the reviews confirm that the performance is reasonable enough).
  • joms_us - Wednesday, October 16, 2019 - link

    I'd like to have this as well, approx. 1 day of battery and able to do media consumption/productivity. At least the comparison so far is that it is on par with, if not better than, the Intel Core i5-8250U.
  • Quantumz0d - Thursday, October 17, 2019 - link

    This mentality shift that Apple caused in consumers is the reason why laptops got rolled over into thin and light disposable garbage.

    Intel BGA / AMD (both PGA and BGA) all have castrated TDP limits which force throttling. And Apple shoved a 6C/8T into a chassis they can't cool because they tried to cheat physics. And still, after the VRM fiasco, they put the 8C/16T into that anemic chassis; people who are buying them deserve to be robbed of the 60% drop in performance.

    Dell shoved the same i9 into the XPS and it got rekt as well. Surface ARM cannot run 64-bit x86 code and it can run the UWP garbage only.

    Next, the BGA hellhole that Apple dragged everyone into. The first x86 Macs had BGA processors. Seeing that, Intel stopped PGA, started the Ultrabook BS, and killed off all MX and XM PGA parts from Haswell onwards, making BGA with cTDP mandatory and locking down bins. My Haswell CPU in a notebook maintains a consistent 700 in CB R15. Not even a 7700HQ does it when put in a TDP-locked JUNK.

    And next, soldered SSDs in Macs paired with the T2 chip. Anything goes bad? Go to Apple. Battery soldered to the chassis. KB inherent design flaw. This stupid corporation only knows how to fleece and put out junk, and passively forced all companies into this thin and light garbage.

    Forget PCIe NVMe SSDs in RAID, that chassis will melt, and GPUs don't exist in Apple lala land because Muh AR ML Kit and all.

    ARM cannot do transcoding work. Period. And image processing like Autodesk won't run on garbage ARM GPUs; it needs Nvidia chips with high-BW GDDR6 or G5X. Apple cannot cool them in that chassis. They can't escape physics. End of story of this A series competing with the x86 ISA and Intel/AMD.
  • Diogene7 - Thursday, October 17, 2019 - link

    @joms_us: Yes, for applications compiled to work natively on ARM64, the performance of an app running on a Windows computer with the Qualcomm 8cx should be similar to the same x86-32bit app running on a Windows computer with an Intel Core i5-8250U, but with the advantage of having much better battery life, which is appealing to me as well :).

    Really looking forward to reading reviews of the Samsung Galaxy Book S and Microsoft Surface Pro X to see if they hold up to the hype...
