The iPhone XS & XS Max Review: Unveiling the Silicon Secrets
by Andrei Frumusanu on October 5, 2018 8:00 AM EST
The A12 Vortex CPU µarch
When talking about the Vortex microarchitecture, we first need to talk about exactly what kind of frequencies we’re seeing on Apple’s new SoC. Over the last few generations Apple has been steadily raising frequencies of its big cores, all while also raising the microarchitecture’s IPC. I did a quick test of the frequency behaviour of the A12 versus the A11, and came up with the following table:
Maximum Frequency vs Loaded Threads (per-core maximum, MHz)

| Apple A11 | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| Big 1 | 2380 | 2325 | 2083 | 2083 | 2083 | 2083 |
| Big 2 | | 2325 | 2083 | 2083 | 2083 | 2083 |
| Little 1 | | | 1694 | 1587 | 1587 | 1587 |
| Little 2 | | | | 1587 | 1587 | 1587 |
| Little 3 | | | | | 1587 | 1587 |
| Little 4 | | | | | | 1587 |

| Apple A12 | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| Big 1 | 2500 | 2380 | 2380 | 2380 | 2380 | 2380 |
| Big 2 | | 2380 | 2380 | 2380 | 2380 | 2380 |
| Little 1 | | | 1587 | 1562 | 1562 | 1538 |
| Little 2 | | | | 1562 | 1562 | 1538 |
| Little 3 | | | | | 1562 | 1538 |
| Little 4 | | | | | | 1538 |
Both the A11's and A12's maximum frequency is actually a single-thread boost clock – 2380MHz for the A11's Monsoon cores and 2500MHz for the new Vortex cores in the A12, just a 5% frequency boost for single-threaded workloads. When adding a second big thread, the A11 and A12 clock down to 2325MHz and 2380MHz respectively. It's when threads are also running concurrently on the small cores that the two SoCs diverge: while the A11 further clocks down to 2083MHz, the A12 retains the same 2380MHz until it hits thermal limits and eventually throttles down.
On the small core side of things, the new Tempest cores are actually clocked more conservatively than their Mistral predecessors. When just one small core was running on the A11, it would boost up to 1694MHz. This behaviour is gone on the A12, where the maximum clock is now 1587MHz. The frequency drops slightly further, down to 1538MHz, when all four small cores are fully loaded.
Much improved memory latency
As mentioned on the previous page, it's evident that Apple has put a significant amount of work into the cache hierarchy as well as the memory subsystem of the A12. Going back to a linear latency graph, we see the following behaviours for full random latencies, for both big and small cores:
The Vortex cores have only a 5% frequency boost over the Monsoon cores, yet absolute L2 latency has improved by 29%, from ~11.5ns down to ~8.8ns – roughly 27 cycles at 2.38GHz down to around 22 cycles at 2.5GHz. The new Vortex cores' L2 cache thus completes its operations in significantly fewer cycles. On the Tempest side, the L2 cycle latency seems to have remained the same, but again there's been a large change in the L2 partitioning and power management, allowing access to a larger chunk of the physical L2.
My test only went up to a depth of 64MB, and it's evident that the latency curves don't yet flatten out in this data set, but it's visible that latency to DRAM has improved. The larger DRAM-access difference on the Tempest cores could be explained by a higher maximum memory controller DVFS frequency when only the small cores are active – their performance will look better when a big thread is also running on the big cores.
The system cache of the A12 has seen some dramatic changes in its behaviour. While bandwidth in this part of the cache hierarchy has regressed compared to the A11, latency has much improved. One significant effect here can be attributed either to the L2 prefetchers or – something I also see as a possibility – to prefetchers on the system cache side: both the latency performance and the number of streaming prefetchers have gone up.
Instruction throughput and latency
Backend Execution Throughput and Latency
(Exec = instructions per cycle, Lat = cycles; Monsoon and Vortex share a column, as their figures are identical except where noted)

| Instruction | Cortex-A75 Exec | Cortex-A75 Lat | Cortex-A76 Exec | Cortex-A76 Lat | Exynos-M3 Exec | Exynos-M3 Lat | Monsoon / Vortex Exec | Monsoon / Vortex Lat |
|---|---|---|---|---|---|---|---|---|
| Integer Arithmetic (ADD) | 2 | 1 | 3 | 1 | 4 | 1 | 6 | 1 |
| Integer Multiply 32b (MUL) | 1 | 3 | 1 | 2 | 2 | 3 | 2 | 4 |
| Integer Multiply 64b (MUL) | 1 | 3 | 1 | 2 | 1 (2× 0.5) | 4 | 2 | 4 |
| Integer Division 32b (SDIV) | 0.25 | 12 | 0.2 | <12 | 1/12 – 1 | <12 | 0.2 | 10 (A11) / 8 (A12) |
| Integer Division 64b (SDIV) | 0.25 | 12 | 0.2 | <12 | 1/21 – 1 | <21 | 0.2 | 10 (A11) / 8 (A12) |
| Move (MOV) | 2 | 1 | 3 | 1 | 3 | 1 | 3 | 1 |
| Shift ops (LSL) | 2 | 1 | 3 | 1 | 3 | 1 | 6 | 1 |
| Load instructions | 2 | 4 | 2 | 4 | 2 | 4 | 2 | |
| Store instructions | 2 | 1 | 2 | 1 | 1 | 1 | 2 | |
| FP Arithmetic (FADD) | 2 | 3 | 2 | 2 | 3 | 2 | 3 | 3 |
| FP Multiply (FMUL) | 2 | 3 | 2 | 3 | 3 | 4 | 3 | 4 |
| Multiply Accumulate (MLA) | 2 | 5 | 2 | 4 | 3 | 4 | 3 | 4 |
| FP Division (S-form) | 0.2–0.33 | 6–10 | 0.66 | 7 | >0.16 | 12 | 0.5 (A11) / 1 (A12) | 10 (A11) / 8 (A12) |
| FP Load | 2 | 5 | 2 | 5 | 2 | 5 | | |
| FP Store | 2 | 1-N | 2 | 2 | 2 | 1 | | |
| Vector Arithmetic | 2 | 3 | 2 | 2 | 3 | 1 | 3 | 2 |
| Vector Multiply | 1 | 4 | 1 | 4 | 1 | 3 | 3 | 3 |
| Vector Multiply Accumulate | 1 | 4 | 1 | 4 | 1 | 3 | 3 | 3 |
| Vector FP Arithmetic | 2 | 3 | 2 | 2 | 3 | 2 | 3 | 3 |
| Vector FP Multiply | 2 | 3 | 2 | 3 | 1 | 3 | 3 | 4 |
| Vector Chained MAC (VMLA) | 2 | 6 | 2 | 5 | 3 | 5 | 3 | 3 |
| Vector FP Fused MAC (VFMA) | 2 | 5 | 2 | 4 | 3 | 4 | 3 | 3 |
To compare the backend characteristics of Vortex, we've tested instruction throughput and latency. Throughput is determined by the number of execution units, while latency is dictated by the quality of their design.
The Vortex core looks pretty much the same as its predecessor Monsoon (A11) – with the exception that we're seemingly looking at new division units, as execution latency has been shaved by 2 cycles on both the integer and FP side. On the FP side, division throughput has also doubled.
Monsoon (A11) was a major microarchitectural update in terms of the mid-core and backend. It's there that Apple widened the design from the 6-wide decode of Hurricane (A10) to a 7-wide decode. The most significant change in the backend was the addition of two integer ALUs, upping the count from 4 to 6 units.
Monsoon (A11) and Vortex (A12) are extremely wide machines – with 6 integer execution pipelines (two of which are complex units), two load/store units, two branch ports, and three FP/vector pipelines, that gives an estimated 13 execution ports, far wider than Arm's upcoming Cortex-A76 and also wider than Samsung's M3. In fact, assuming we're not looking at an atypical shared-port situation, Apple's microarchitecture seems to far surpass anything else in terms of width, including desktop CPUs.
253 Comments
Constructor - Monday, October 8, 2018 - link
Sure you can – just go to airplane mode and re-enable WiFi.

s.yu - Monday, October 8, 2018 - link
In the third and fifth low light samples I think it's pretty clear that the XS beat the P20P; again the P20P's retention of texture is horrible, and only in extreme low light (second and last sample) and scenes with many man-made objects which lack obvious texture (third sample) does its excessive sharpening and NR have an advantage. The first sample is a surprise; there either the scene is actually darker than I'm led to believe or there's something else I've not taken into account.

daiquiri - Monday, October 8, 2018 - link
Isn't it too bold an affirmation that the A12 is on par with current desktop CPUs? If so, why didn't we stack them in order to have 5x the processing power of a common laptop/workstation for 1/2 of the power? What am I missing here? Because this doesn't make much sense to me.

If I have 2 or 3 of these SoC chips with Photoshop running on them, will I have my filters running faster? Chrome loading webpages faster? Creating zip archives or converting flac files 3x faster than an Intel i7?
resiroth - Monday, October 8, 2018 - link
There is widespread speculation (and at this point, it is considered more likely than not) that Apple will transition to their own chips exclusively for their Mac products in the near future (within 2 years). What exactly doesn't make sense to you?
Ps: will be very interesting to see results of new iPad Pro this year too.
daiquiri - Monday, October 8, 2018 - link
What is strange to me is why we aren't already stacking these SoCs. I suppose I could fit 6 or 8 of these processors on the same die size. This means I would have a much faster PC for less power? If they are that great, why aren't they on desktops already?

Does this mean if I manage to install Windows 10 ARM edition on my PC, Photoshop will run faster and Chrome will load pages two or three times faster since I have 8 of these SoCs on my desktop PC?
Constructor - Wednesday, October 10, 2018 - link
Because just increasing the core count is not quite enough.

As I have explained above, a platform transition is a massive undertaking which among many other things needs actually superior performance to afford the initially still necessary legacy code emulation.
This will almost certainly start at a WWDC (next year or the year after that, most likely) with an announcement about not just coming hardware but also about those new mechanisms in macOS itself, and actual Axx-powered Macs may take months after that to emerge while developers start porting their code to the new platform.
It's not as if this was the first time for Apple – they've done it twice already!
varase - Tuesday, October 23, 2018 - link
Yeah, but they always went to a *much* faster CPU to do the emulation.

Constructor - Wednesday, October 24, 2018 - link
Even just the iPhone cores right now are already at about a desktop i5 level according to GeekBench.

There should be quite some room for upscaling with more power and better cooling.
tipoo - Wednesday, October 24, 2018 - link
These cores are 30% larger than Intel's, let that sink in.

I'm sure 8 of them would perform marvellously, for the cost. And it may be coming.
Zoolook - Tuesday, October 9, 2018 - link
I just pulled up an old test of an i3-6320, a 3-year-old dual-core, and the A12 does well on some of the tests, but in many tests it's quite a bit behind, so it's not ready for server duty yet.

I know that the i3 has higher frequencies, but it's also only two cores, compared to 2+4.
There is no question that the big cores are very efficient, but they are optimized for their current job.
If Apple were to build a 6- or 8-core processor based on their large cores, throw away most of the rest of the SoC, and add other parts (stronger SIMD etc.), then yes, we might have a good desktop/server chip – but the A12 is not that chip.
Also remember that we are comparing 14nm Intel chips with the newest "7nm" process, if Intel ever gets their 10nm process up and running for real, then we could do a better comparison.