SPEC2006 Perf: Desktop Levels, New Mobile Power Heights

Given that we didn't see many major changes in the microarchitecture of the large Lightning CPU cores, we wouldn't expect a particularly large performance increase over the A12. However, the 6% clock increase, alongside a few percent improvement in IPC – thanks to improvements in the memory subsystems and core front-end – could, should, and does end up delivering around a 20% performance boost, which is consistent with what Apple is advertising.
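
To get a feel for how these two factors compose, here's a quick back-of-the-envelope sketch. The ~13% IPC figure is an assumption back-derived from the overall claim rather than a published number:

```python
# Clock and IPC gains compose multiplicatively.
# The IPC gain below is an assumed figure, not a published one.
clock_gain = 1.06   # 6% frequency increase over the A12
ipc_gain = 1.13     # assumed IPC improvement from memory/front-end changes

total_speedup = clock_gain * ipc_gain
print(f"Expected speedup: {total_speedup:.2f}x")  # ~1.20x, matching the ~20% claim
```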

I'm still falling back to SPEC2006 for the time being, as I haven't had time to port and test SPEC2017 for mobile devices yet – it's something that's in the pipeline for the near future.

In SPECint2006, the improvements in performance are relatively evenly distributed. On average, we're seeing a 17% increase in performance. The biggest gains came in 471.omnetpp, which is latency bound, and 403.gcc, which puts more pressure on the caches; these tests saw respective increases of 25% and 24%, which is quite significant.
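
As an aside on how such suite-level figures are aggregated: SPEC combines per-test ratios with a geometric mean, so no single test dominates the result. A minimal sketch with illustrative speedup numbers (not the actual measured data):

```python
import math

# Illustrative per-test speedups (A13 score / A12 score) -- not the real data.
speedups = [1.25, 1.24, 1.17, 1.15, 1.09]

# SPEC-style aggregation uses the geometric mean rather than the arithmetic mean.
geomean = math.prod(speedups) ** (1 / len(speedups))
print(f"Suite speedup: {geomean:.2f}x")
```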

The 456.hmmer score increase is the lowest at 9%. That workload is heavily execution backend-bound, and given that the Lightning cores didn't see many changes in that regard, we're mostly seeing a minor IPC increase here along with the 6% increase in clock.

While the performance figures are quite straightforward and don't reveal anything surprising, the power and efficiency figures, on the other hand, are extremely unexpected. In virtually all of the SPECint2006 tests, Apple has increased the peak power draw of the A13 SoC, and in many cases we're almost 1W above the A12. At peak performance, the power increase was greater than the performance increase, which is why in almost all workloads the A13 ends up less efficient than the A12.
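
To make the efficiency argument concrete: efficiency here is energy for a fixed amount of work, so a faster chip can still lose if its power grows faster than its performance. A minimal sketch with hypothetical numbers in the ballpark of the measured figures:

```python
# Energy = average power x runtime. Numbers below are hypothetical.
a12_power_w, a12_runtime_s = 3.9, 500.0
a13_power_w = 4.8                      # ~23% higher peak power
a13_runtime_s = 500.0 / 1.17           # ~17% faster -> shorter runtime

a12_energy_j = a12_power_w * a12_runtime_s
a13_energy_j = a13_power_w * a13_runtime_s
print(f"A12: {a12_energy_j:.0f} J  A13: {a13_energy_j:.0f} J")
# The A13 finishes sooner, yet burns ~5% more joules for the same work:
# the power increase outpaced the performance increase.
```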

In the SPECfp2006 workloads, we're seeing a similar story. The performance increases for the A13 are respectable, averaging 19% for the suite, with individual increases between 14% and 25%.

The total power use is quite alarming here, as we're exceeding 5W in many workloads. In 470.lbm the chip went even higher, averaging 6.27W. Had I not been actively cooling the phone and purposely preventing it from throttling, it would have been impossible for the chip to maintain this performance for prolonged periods.

Here we saw a few workloads that were kinder in terms of efficiency: while power consumption is still notably increased, it scales more linearly with performance. In others, however, we're still seeing an efficiency regression.

Above is a more detailed historical overview of performance across the SPEC workloads and our past tested SoCs. We've now included the latest high-end desktop CPUs as well, to give context as to where mobile SoCs stand in terms of absolute performance.

Overall, in terms of performance, the A13 and the Lightning cores are extremely fast. In the mobile space, there's really no competition, as the A13 posts almost double the performance of the next-best non-Apple SoC. The difference is a little less pronounced in the floating-point suite, but again, we're not expecting any proper competition for at least another 2-3 years, and Apple isn't standing still either.

Last year I noted that the A12 was within margins of the best desktop CPU cores. This year, the A13 has essentially matched the best that AMD and Intel have to offer – in SPECint2006, at least. In SPECfp2006 the A13 is still roughly 15% behind.

In terms of power and efficiency, the A13 seemingly wasn't a very successful iteration for Apple, at least when it comes to efficiency at the chip's peak performance state. The higher power draw means the SoC, and the phone, will be more prone to throttling and more sensitive to temperatures.


[Chart: estimated power curve – note that this is the A12, not the A13]

One possible explanation for the quite shocking power figures is that for the A13, Apple is riding the far end of the frequency/voltage curve at the peak frequencies of the new Lightning cores. In the above graph we have an estimated power curve for last year's A12 – here we can see that Apple is very conservative with voltage up until the last few hundred MHz. It's possible that for the A13, Apple was even more aggressive in the later frequency states.
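
To illustrate why the last few hundred MHz are so costly, here's a toy model using the classic dynamic-power relation P ∝ C·V²·f. The voltage/frequency points are invented for illustration; Apple's actual curves are not public:

```python
# Toy dynamic-power model: P = C * V^2 * f (leakage ignored).
# The (frequency, voltage) points below are invented for illustration.
points = [
    (2.0, 0.80),
    (2.3, 0.85),
    (2.5, 0.95),
    (2.66, 1.05),  # the "far end" of the curve
]

C = 1.0  # arbitrary constant; only relative power matters here
for f_ghz, v in points:
    power = C * v**2 * f_ghz
    print(f"{f_ghz:.2f} GHz @ {v:.2f} V -> relative power {power:.2f}")

# In this model the final ~6% of frequency costs ~30% more power,
# because voltage has to climb along with frequency.
```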

The good news about such a hypothesis is that the A13, on average and in daily workloads, should be operating at significantly more efficient operating points. Apple's marketing materials describe the A13 as being 20% faster while also stating that it uses 30% less power than the A12, which unfortunately is phrased in a deceiving (or at least unclear) manner. While we suspect a lot of people will interpret this to mean that the A13 is 20% faster while simultaneously using 30% less power, it's actually one or the other. In effect, what it means is that at a performance point equivalent to the peak performance of the A12, the A13 would use 30% less power. Given the steepness of Apple's power curves, I can easily imagine this being accurate.
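
A small worked example of the intended reading, with an assumed A12 baseline power:

```python
# Apple's claim describes two separate operating points on the A13's curve,
# not one simultaneous point. The baseline wattage is assumed.
a12_peak_power_w = 4.0          # assumed A12 peak power in a given workload

# Point 1: A13 at its own peak frequency -> ~20% more performance
#          (power at this point is NOT part of the claim).
a13_peak_perf_gain = 1.20

# Point 2: A13 clocked down to A12-equivalent performance -> ~30% less power.
a13_iso_perf_power_w = a12_peak_power_w * (1 - 0.30)

print(f"A13 at A12-level performance: ~{a13_iso_perf_power_w:.1f} W vs {a12_peak_power_w:.1f} W")
```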

Nevertheless, I do question why Apple decided to be so aggressive on power this generation. The N7P process node used in this generation didn't bring any major improvements, so it's possible Apple was in the tough spot of choosing between increasing power or making do with more meager performance increases. Whatever the reason, in the end it doesn't cause any practical issues for the iPhone 11s, as the chip's thermal management is top notch.

Comments

  • Irish910 - Friday, October 18, 2019 - link

    Why so salty? If you hate Apple so much, why are you here reading this article? Sounds like you're insecure about your Android phone, which basically gets mopped up by the new iPhones in every area that counts. Shoo shoo now.
  • shompa - Thursday, October 17, 2019 - link

    Desktop performance. Do you understand the difference between CPU performance and app performance? X86 has never had the fastest CPUs. It had Windows and was good enough / cheaper than the RISC stuff. The reason why, for example, Adobe is "faster" on X86 is that Intel adds more and more specific instructions (AVX/AVX512) to halt competition. Adobe/MSFT are lazy companies and don't recompile stuff for other architectures.
    For example, when DVD encoding was introduced in 2001 by Pioneer/Apple DVD-R, I bought a 10K PC with the fastest CPU there was, graphics, SCSI disks and so on. Doing an MPEG 2 encode took 15 hours. My first Mac was a 667MHz PowerBook. The same encode took 90 minutes. No, the G4 was not 10 times faster; it was AltiVec, which Intel introduced as AVX when Apple switched to Intel. X86 doesn't even have real 64bit, and therefore the 32bit parts in the CPU can't be removed. X86 is the only computer system where 64bit code runs slower than 32bit (about 3%). All other real 64bit systems gained 30-50% in speed. And it's not about memory like PC clickers believe. Intel/ARM and others had 38bit memory addressing. That is 64 gigs, with a 4 gig limit per app. Still, today: how many apps use more than 4 gigs of memory? RISC went 64bit in 1990. Sun went 64bit, with a 64bit OS, in 1997. Apple went 64bit in 2002. Windows went 64bit after the PlayStation 4 / Xbox One started to get 64bit games.

    By controlling the OS and hardware, companies can optimize the OS and software. That is why Apple, Google and MSFT are starting to use their own SoCs. And it's better for customers. There's no reason a better X86 chip costs 400 dollars plus a 100 dollar motherboard tax. Intel 4-core 14nm CPUs cost less than 6 dollars to produce. The problem is customers: they are prepared to pay more for IntelInside, and it's based on the wrong notion that "it's faster". The faster MSFT moves to ARM / RISC-V, the better. And if the rumors are right, Samsung is moving to RISC-V. That would shake up the mobile market.
  • Quantumz0d - Thursday, October 17, 2019 - link

    Samsung just killed the Texas team's funding. And you don't want to pay for a socketed board and industry standard, but rather have a Surface which runs on an off-the-shelf processor and targets a small workload in a PC?

    Also dude, where are you pulling this $6 Intel CPU figure from? And I presume you already know how R&D works in lithography? ROI pays off once the momentum has begun. So you're frustrated with the 4C8T Intel monopoly and want some magical unicorn out of thin air which is as fast as that, and cheap, and portable a.k.a. soldered. Intel stagnated because of no competition. Now AMD has come along with better pricing and more bang for the buck.

    Next, from big-room mainframes to the pocket PC (unfortunately with iOS it's not, because of no filesystem, and Google is following the same path with Scoped Storage), Microsoft put computers in homes, and now they've recently started moving away into SaaS and DaaS BS. And with that thin client dream of yours, it'll be detrimental to computer HW owners, or anyone who wants to own.

    We do not want proprietary walled gardens with Orwellian drama like iOS. We need more Linux, and a more powerful and robust OS like Windows which handles customization, despite getting sandbagged by M$ slowly removing the Control Panel and migrating away from Win32. Nobody wants that.

    https://www.computerworld.com/article/3444606/with...
  • jv007 - Wednesday, October 16, 2019 - link

    The Lightning big cores are not very impressive this time.
    From 4 watts to 5 watts is a 25% increase in power for 17% more performance.
    Good for benchmarks (and the phone was actively cooled here), but not good for throttling.
    7nm and no EUV; maybe next year 5nm with EUV will bring serious improvements.
    I wonder if we will see an A13X.
  • name99 - Wednesday, October 16, 2019 - link

    "The lighting big cores are not very impressive this time"

    A PHONE core that matches the best Intel has to offer is "not impressive"?
    OK then...
  • Total Meltdowner - Thursday, October 17, 2019 - link

    Comparing this CPU to Intel's is silly. They run completely different instruction sets.
  • Quantumz0d - Sunday, October 20, 2019 - link

    It has been overblown. The SPEC score is all the A series chips have. They can't replace x86 chips; even Apple uses x86 cores with RHEL or free Linux distributions to run its services. The whole world runs on the same ISA. These people just whiteknight it like a breakthrough while the whole of iOS lacks basic filesystem access and the latest Catalina cannot run non-notarized apps.

    Also to note the Apple first-party premium optimization that Apple pays companies like Adobe for. If you run MacOS / Trashbook Pro BGA / iOS with any non-optimized SW, it will be held back in power consumption and everything else. It's just a glorified *nix OS, and it keeps floating on that first-party support. They missed out on mass-scale deployment like Windows or Linux, and that's going to be their Achilles heel, along with the ongoing transformation of MacOS into iOS rather than the opposite.

    It's really funny when you look at how 60% of the performance is the max one can get from MacOS-based HW / Intel machines, due to severe thinning of the chassis for that sweet BGA appeal and non-user-serviceable HW, while claiming all recycled parts and all. I'm glad that Apple can't escape physics: VRM throttling, low-quality BGA engineering with cTDP garbage, etc. Also, people just blatantly forget how the DRAM of those x86 processors scales beyond 4000MHz DDR4, and the PCIe lanes they push out with massive I/O, while the anemic trash on Apple Macs is a USB-C dongle world. ARM replicating the same, especially a wide A series with all the uncore and PCIe I/O support? Nope. It's not going to happen. Apple would need to invest billions again, and they are very conservative when it comes to that kind of massive scale.

    Finally, to note: ARM cannot replace x86. Period. The HPC/DC market of Chipzilla Intel and AMD won't allow this BS. Also, the x86 ISA is mature, plus there's how LGA and other sockets come along, while ARM is stuck with BGA BS, and thus they can never replace these in the consumer market.

    Let the fanboys live in their dream utopia.
  • tipoo - Thursday, October 17, 2019 - link

    Given that the little cores are more efficient, and the battery is significantly larger, maybe they allowed a one-time regression in peak performance per watt to gain that extra performance, without a node shrink this year.
  • zeeBomb - Wednesday, October 16, 2019 - link

    the time has come.
  • joms_us - Wednesday, October 16, 2019 - link

    Show us that the A13 can beat even first-gen Ryzen or Intel Skylake – run PCMark, Cinebench or any modern games – otherwise this nonsense desktop-level claim should go in the bin. You are using a primitive SPEC app to demonstrate the IPC?

    I can't wait for Apple to ditch the Intel processor inside their MBP and replace it with this SoC. Oh wait, no, it won't happen in a decade, because this cannot run a full-fledged OS with real multi-tasking. =D
