GPU Performance & Power

We've covered the A13's CPUs in detail, but there's also the GPU to consider. Apple's performance improvement claims this year are a little more conservative, with the company promising a 20% performance increase, or a 40% decrease in power at the same performance as the A12. Last year's jump was a rather large one, and we don't expect Apple (or any vendor, for that matter) to repeat it any time soon, especially as it was driven by both major microarchitectural changes and the adoption of the new 7nm manufacturing node at the same time.

Beyond the raw performance of the chipset and the GPU, what's important for gaming is the actual device's thermal characteristics and how well it's able to dissipate the SoC's high heat generation. For the A12 I did criticize Apple for being extremely aggressive with the peak power that the phones were allowed to start off with in 3D workloads. This resulted in the phones not really being able to sustain those performance levels for more than 2-3 minutes before having to throttle down.

This year, beyond the promised efficiency gains, Apple says it has improved the devices' SoC cooling, better spreading the heat from the SoC to the body of the phone and thus allowing the silicon to retain higher performance states.

3DMark Sling Shot 3.1 Extreme Unlimited - Physics

Starting off with 3DMark's physics test: this is actually more of a CPU workload, measuring CPU performance while it's power constrained by a concurrent GPU load. In this scenario the iPhone 11s fare a bit better in terms of peak performance compared to last year's iPhones, however they weren't quite able to maintain the same sustained performance as we saw on the A12 iPhones.

The iPhone 11 Pro Max showcased better scores than its siblings, and that's not too much of a surprise given that the phone has the biggest form factor and thermal envelope, letting it dissipate larger amounts of heat.

3DMark Sling Shot 3.1 Extreme Unlimited - Graphics

Switching over to the graphics workload, which puts the maximum amount of stress on the GPU, we now see major changes in the scores and rankings. First of all, the new iPhone 11s and the A13 showcase significant performance increases compared to last year's A12 devices. I had noted that the A12 was oddly weak in 3DMark when we analyzed the chip, and it looks like Apple was able to resolve whatever the bottleneck was this generation, showcasing a 38% increase in performance. I've actually gone back and quickly retested the iPhone XS on iOS 13 and did see a 20% increase in performance compared to what we see in the graphs here; I'll be updating those devices' scores as soon as I have more time.

The iPhone 11 Pros are doing much better than the regular iPhone 11 when it comes to the sustained performance results. I'm actually a bit surprised here given that these are the phones which have the SoC sandwiched between two stacked PCBs, but it seems Apple is able to cool off that whole assembly decently enough. The iPhone 11's scores here are a bit disappointing, as they represent an almost 50% degradation from the phone's peak performance.

The new iPhones don't score quite as well as some Snapdragon 855(+) devices in sustained performance, but that's largely because Apple does not allow the iPhones to get nearly as hot as some of those other devices: I wasn't able to measure skin temperatures above 41°C on any of the new iPhones.

GFXBench Aztec Ruins - High - Vulkan/Metal - Off-screen

In the GFXBench Aztec High test, Apple’s microarchitecture is better able to flex its muscles and more clearly takes the lead in terms of both peak and sustained performance. Comparing the iPhone 11 Pro to the iPhone XS, we see a 23% increase in peak performance, and most importantly a much more impressive 50% increase in sustained performance.

GFXBench Aztec High Offscreen Power Efficiency (System Active Power)

Device | Mfc. Process | FPS | Avg. Power (W) | Perf/W Efficiency
iPhone 11 Pro (A13) Warm | 7FFP | 26.14 | 3.83 | 6.82 fps/W
iPhone 11 Pro (A13) Cold / Peak | 7FFP | 34.00 | 6.21 | 5.47 fps/W
iPhone XS (A12) Warm | 7FF | 19.32 | 3.81 | 5.07 fps/W
iPhone XS (A12) Cold / Peak | 7FF | 26.59 | 5.56 | 4.78 fps/W
Galaxy S10+ (Snapdragon 855) | 7FF | 16.17 | 4.69 | 3.44 fps/W
Galaxy S10+ (Exynos 9820) | 8LPP | 15.59 | 4.80 | 3.24 fps/W

Measuring power consumption, we again see that the A13 devices are extremely aggressive with their peak power, exceeding 6.2W. What's interesting here is that even in this power-hungry peak performance state, the A13 is more efficient than the A12, and massively more efficient than the competition.

As usual, running a workload for a few minutes until the phone gets lukewarm (not to be confused with the longer sustained performance states in the benchmark graphs) lowers performance and power to more reasonable levels. We're able to make an almost apples-to-apples comparison here between the A13 and A12 iPhones: at roughly the same 3.8W power draw, the new A13-based device showcases a 35% increase in performance. This performance state of the A13 also happens to correspond to the peak performance of the A12, which lets us make the same comparison along the performance axis: at the same performance as the A12, the A13 uses 32% less power. That's not quite the 40% Apple promised, but the figure could vary depending on workload (or it could be that Apple is quoting GPU power only, while we're measuring total system active power here).
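
To make that arithmetic explicit, here's a small illustrative Python snippet (not part of any benchmark tooling; the variable names are ours) that derives the iso-power and iso-performance comparisons from the Aztec High table above:

```python
# Illustrative sketch: deriving the iso-power and iso-performance comparisons
# from the Aztec High measurements in the table above.

# (fps, system active power in W) from the Aztec High table
a13_warm = (26.14, 3.83)   # iPhone 11 Pro (A13), warmed-up state
a12_warm = (19.32, 3.81)   # iPhone XS (A12), warmed-up state
a12_peak = (26.59, 5.56)   # iPhone XS (A12), cold / peak state

# Iso-power: both warm states draw roughly 3.8 W, so the fps delta is a
# like-for-like performance comparison.
iso_power_gain = a13_warm[0] / a12_warm[0] - 1         # ~0.35 -> ~35% faster

# Iso-performance: the A13's warm state (~26 fps) roughly matches the A12's
# peak state, so the power delta is a like-for-like power comparison.
iso_perf_power_saving = 1 - a13_warm[1] / a12_peak[1]  # ~0.31 -> ~31-32% less power

print(f"At ~3.8 W: A13 is {iso_power_gain:+.0%} vs A12")
print(f"At ~26 fps: A13 draws {iso_perf_power_saving:.0%} less power than A12")
```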

GFXBench Aztec Ruins - Normal - Vulkan/Metal - Off-screen

GFXBench Aztec Normal Offscreen Power Efficiency (System Active Power)

Device | Mfc. Process | FPS | Avg. Power (W) | Perf/W Efficiency
iPhone 11 Pro (A13) Warm | 7FFP | 73.27 | 4.07 | 18.00 fps/W
iPhone 11 Pro (A13) Cold / Peak | 7FFP | 91.62 | 6.08 | 15.06 fps/W
iPhone XS (A12) Warm | 7FF | 55.70 | 3.88 | 14.35 fps/W
iPhone XS (A12) Cold / Peak | 7FF | 76.00 | 5.59 | 13.59 fps/W
Galaxy S10+ (Snapdragon 855) | 7FF | 40.63 | 4.14 | 9.81 fps/W
Galaxy S10+ (Exynos 9820) | 8LPP | 40.18 | 4.62 | 8.69 fps/W

The "Normal" Aztec benchmark, which runs at a lower resolution and with less workload complexity, actually fares even better for the iPhone 11s. Peak performance has improved by 21%. At roughly the same power, the A13 is 31% faster, while at almost the same performance it's again 32% more efficient.
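
The same sanity check can be run against the Aztec Normal figures; the short illustrative snippet below (data taken from the table above, names and script our own) reproduces both the iso-power gain and the efficiency advantage at matched performance:

```python
# Illustrative cross-check against the Aztec Normal table above.
a13_warm_fps, a13_warm_w = 73.27, 4.07   # iPhone 11 Pro (A13), warm
a12_warm_fps, a12_warm_w = 55.70, 3.88   # iPhone XS (A12), warm
a12_peak_fps, a12_peak_w = 76.00, 5.59   # iPhone XS (A12), cold / peak

# At roughly the same ~4 W power draw, compare frame rates directly:
print(f"Iso-power gain:  {a13_warm_fps / a12_warm_fps - 1:+.1%}")   # ~+31%

# At roughly matched performance (A13 warm vs A12 peak), compare fps/W:
a13_eff = a13_warm_fps / a13_warm_w
a12_eff = a12_peak_fps / a12_peak_w
print(f"Efficiency gain: {a13_eff / a12_eff - 1:+.1%}")             # ~+32%
```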

GFXBench Manhattan 3.1 Off-screen

GFXBench Manhattan 3.1 Offscreen Power Efficiency (System Active Power)

Device | Mfc. Process | FPS | Avg. Power (W) | Perf/W Efficiency
iPhone 11 Pro (A13) Warm | 7FFP | 100.58 | 4.21 | 23.89 fps/W
iPhone 11 Pro (A13) Cold / Peak | 7FFP | 123.54 | 6.04 | 20.45 fps/W
iPhone XS (A12) Warm | 7FF | 76.51 | 3.79 | 20.18 fps/W
iPhone XS (A12) Cold / Peak | 7FF | 103.83 | 5.98 | 17.36 fps/W
Galaxy S10+ (Snapdragon 855) | 7FF | 70.67 | 4.88 | 14.46 fps/W
Galaxy S10+ (Exynos 9820) | 8LPP | 68.87 | 5.10 | 13.48 fps/W
Galaxy S9+ (Snapdragon 845) | 10LPP | 61.16 | 5.01 | 11.99 fps/W
Huawei Mate 20 Pro (Kirin 980) | 7FF | 54.54 | 4.57 | 11.93 fps/W
Galaxy S9 (Exynos 9810) | 10LPP | 46.04 | 4.08 | 11.28 fps/W
Galaxy S8 (Snapdragon 835) | 10LPE | 38.90 | 3.79 | 10.26 fps/W
Galaxy S8 (Exynos 8895) | 10LPE | 42.49 | 7.35 | 5.78 fps/W

Manhattan 3.1 largely showcases similar results to the Aztec Normal scores.

GFXBench T-Rex 2.7 Off-screen

Finally, in the older T-Rex benchmark the new iPhone 11s showcase significant improvements in sustained performance, around 59% compared to last year's XS devices.

GFXBench T-Rex Offscreen Power Efficiency (System Active Power)

Device | Mfc. Process | FPS | Avg. Power (W) | Perf/W Efficiency
iPhone 11 Pro (A13) Warm | 7FFP | 289.03 | 4.78 | 60.46 fps/W
iPhone 11 Pro (A13) Cold / Peak | 7FFP | 328.90 | 5.93 | 55.46 fps/W
iPhone XS (A12) Warm | 7FF | 197.80 | 3.95 | 50.07 fps/W
iPhone XS (A12) Cold / Peak | 7FF | 271.86 | 6.10 | 44.56 fps/W
Galaxy S10+ (Snapdragon 855) | 7FF | 167.16 | 4.10 | 40.70 fps/W
Galaxy S9+ (Snapdragon 845) | 10LPP | 150.40 | 4.42 | 34.00 fps/W
Galaxy S10+ (Exynos 9820) | 8LPP | 166.00 | 4.96 | 33.40 fps/W
Galaxy S9 (Exynos 9810) | 10LPP | 141.91 | 4.34 | 32.67 fps/W
Galaxy S8 (Snapdragon 835) | 10LPE | 108.20 | 3.45 | 31.31 fps/W
Huawei Mate 20 Pro (Kirin 980) | 7FF | 135.75 | 4.64 | 29.25 fps/W
Galaxy S8 (Exynos 8895) | 10LPE | 121.00 | 5.86 | 20.65 fps/W

The warmed-up power draw here is quite a bit higher than in the other tests. It's possible the difference is due to higher CPU load, given the very high FPS figures the test reaches on modern devices.

GPU Performance: Best In Class

Last year the A12 brought some extremely impressive GPU improvements, and it was the first time Apple was able to very clearly jump ahead of Qualcomm in both performance and efficiency. I didn't have particularly high expectations for the A13 as a follow-up, but Apple was very much able to impress, improving by greater margins than its marketing materials led me to believe.

First of all, the peak performance of the A13 is indeed improved by roughly 20%. However, this is not the metric people should be paying the most attention to. Apple's sustained performance improvements are a lot more significant, reaching 50 to 60% compared to last year's iPhones. It would seem Apple's claimed improvements to the SoC's thermal dissipation have worked out extremely well.
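
As a quick sanity check on that roughly 20% figure, the illustrative snippet below recomputes the peak (cold) A13-over-A12 gains from the four GFXBench power tables on this page; the 50-60% sustained gains come from the longer benchmark runs in the charts, not from these tables:

```python
# Illustrative sketch: peak (cold) A13 vs A12 gains, recomputed from the
# GFXBench power tables on this page. Script and names are ours.

peak_fps = {
    # test name:      (A13 peak fps, A12 peak fps)
    "Aztec High":     (34.00,  26.59),
    "Aztec Normal":   (91.62,  76.00),
    "Manhattan 3.1":  (123.54, 103.83),
    "T-Rex":          (328.90, 271.86),
}

for test, (a13, a12) in peak_fps.items():
    print(f"{test:>14}: {a13 / a12 - 1:+.1%} peak performance vs A12")

# Aztec High comes out closer to +28%, while the other three land around
# +19-21%, in line with Apple's stated figure.
```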

The regular iPhone 11 does lag a bit behind the Pro models, as it seems it hasn't been able to benefit from the same design changes. Sustained performance here takes a small hit, but given the phone's much lower display resolution I have to wonder whether that even matters in real workloads.

Most of all, Apple’s new GPU microarchitecture on the A13 is extremely impressive. Given the meager process node advancements, I had not expected the company to be able to push for such large performance and power efficiency gains. We’ll need to see some major paradigm shifts from the competition in order for them to be able to catch up in the next generation of devices.

Last year I complained about the phones getting quite hot during the initial load periods at peak performance, and it looks like Apple has resolved this, as I wasn't able to measure skin temperatures above 41°C on any of the new phones. While I still question Apple's need to drive power draw near the limits of the phone's power delivery, at least this time around it doesn't create any negative drawbacks for the user experience.
