Cortex A76 µarch - Backend

Switching to the back-end, we now have a look at the execution core.

The integer core contains 6 issue queues and execution ports (4 depicted in the slide, plus 2 load/store pipelines). There are 3 integer execution pipelines – two ALUs capable of simple arithmetic operations and a complex pipeline that also handles multiplication, division and CRC ops. The three integer pipelines are served by 16-deep issue queues, and an issue queue of the same size also serves the single branch execution port.

The two load/store units are the remaining ports of the integer core and are served by two 12-deep issue queues. The issue queue stages are 3 cycles deep, and while I mentioned that rename/dispatch is 1 stage deep, the dispatch stage actually overlaps with the first cycle of the issue queue stages.

The ASIMD/floating point core contains two pipelines which are served by two 16-deep issue queues.

When it comes to the backend of a CPU core, the two most important metrics are instruction throughput and latency. Where the A76 in particular improves a lot is instruction latency, as it's able to shave off cycles on very important instructions.

To give a better overview of the improvements, I've created a table of the most common instruction types. The execution throughputs and latencies presented here are for AArch64 instructions and, if not otherwise noted, represent operations on 64-bit data for integer and 64-bit (double precision) FP.

Backend Execution Throughput and Latency

(Exec = execution throughput in instructions per cycle; Lat = result latency in cycles)

| Instruction type | Cortex-A75 Exec | Cortex-A75 Lat | Cortex-A76 Exec | Cortex-A76 Lat | Exynos-M3 Exec | Exynos-M3 Lat |
|---|---|---|---|---|---|---|
| Integer Arithmetic (Add, sub) | 2 | 1 | 3 | 1 | 4 | 1 |
| Integer Multiply 32b | 1 | 3 | 1 | 2 | 2 | 3 |
| Integer Multiply 64b | 1 | 3 | 1 | 2 | 1 (2x 0.5) | 4 |
| Integer Multiply Accumulate | 1 | 3 | 1 | 2 | 1 | 3 |
| Integer Division 32b | 0.25 | 12 | 0.2 | <12 | 1/12 - 1 | <12 |
| Integer Division 64b | 0.25 | 12 | 0.2 | <12 | 1/21 - 1 | <21 |
| Move (Mov) | 2 | 1 | 3 | 1 | 3 | 1 |
| Shift ops (Lsl) | 2 | 1 | 3 | 1 | 3 | 1 |
| Load instructions | 2 | 4 | 2 | 4 | 2 | 4 |
| Store instructions | 2 | 1 | 2 | 1 | 1 | 1 |
| FP Arithmetic | 2 | 3 | 2 | 2 | 3 | 2 |
| FP Multiply | 2 | 3 | 2 | 3 | 3 | 4 |
| FP Multiply Accumulate | 2 | 5 | 2 | 4 | 3 | 4 |
| FP Division (S-form) | 0.2-0.33 | 6-10 | 0.66 | 7 | >0.16 (2x 1+/12) | 12 |
| FP Load | 2 | 5 | 2 | 5 | 2 | 5 |
| FP Store | 2 | 1-N | 2 | 2 | 2 | 1 |
| ASIMD Arithmetic | 2 | 3 | 2 | 2 | 3 | 1 |
| ASIMD Multiply | 1 | 4 | 1 | 4 | 1 | 3 |
| ASIMD Multiply Accumulate | 1 | 4 | 1 | 4 | 1 | 3 |
| ASIMD FP Arithmetic | 2 | 3 | 2 | 2 | 3 | 2 |
| ASIMD FP Multiply | 2 | 3 | 2 | 3 | 1 | 3 |
| ASIMD FP Chained MAC (VMLA) | 2 | 6 | 2 | 5 | 3 | 5 |
| ASIMD FP Fused MAC (VFMA) | 2 | 5 | 2 | 4 | 3 | 4 |
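As a methodological aside, numbers like these are usually derived from dependency-chain microbenchmarks: a chain in which every operation consumes the previous result runs at one instruction per latency, while several independent chains expose the throughput limit instead. The C sketch below illustrates the technique – the timing approach and iteration counts are purely illustrative, not the exact methodology behind the table:

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define ITERS 100000000ULL

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

/* Read the operands through volatiles so the compiler can't
 * strength-reduce the multiplies into shifts/adds: we want the
 * loops to compile to real MADD instructions. */
static volatile uint64_t mul_v = 3, add_v = 1;

int main(void) {
    uint64_t m = mul_v, k = add_v;
    uint64_t a = 1, b = 2, c = 3;
    volatile uint64_t sink;

    /* Latency: every MADD consumes the previous result, so this loop
     * runs at one op per multiply-accumulate latency (2 cycles on the
     * A76, 3 on the A75). */
    double t0 = now_sec();
    for (uint64_t i = 0; i < ITERS; i++)
        a = a * m + k;
    double t_lat = now_sec() - t0;

    /* Throughput: three independent chains can overlap in the
     * pipeline, so the limit becomes issue rate rather than latency. */
    t0 = now_sec();
    for (uint64_t i = 0; i < ITERS; i++) {
        a = a * m + k;
        b = b * m + k;
        c = c * m + k;
    }
    double t_thr = now_sec() - t0;

    sink = a + b + c;   /* keep the chains live past the optimizer */
    (void)sink;

    printf("dependent chain:    %.3f ns/op\n", t_lat * 1e9 / ITERS);
    printf("independent chains: %.3f ns/op\n", t_thr * 1e9 / (ITERS * 3));
    return 0;
}
```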

On the integer operations side, the A76 improves the multiply and multiply-accumulate latencies from 3 cycles down to 2 cycles, with throughput remaining the same as on the A75. And because the A76 has 3 integer pipelines, simple arithmetic operations see a 50% increase in throughput versus the A75's 2 pipelines.
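To illustrate why that latency cut matters in practice, consider Horner-rule polynomial evaluation, in which every step is a multiply-accumulate depending on the previous one. A minimal sketch (the function is my own illustration, not from Arm's materials):

```c
#include <stdint.h>

/* Horner-rule evaluation: each step is acc = acc * x + coef[i],
 * which compiles to a single MADD on AArch64. Every MADD consumes
 * the previous result, so the loop can't run faster than one
 * iteration per multiply-accumulate latency: ~3 cycles/step on the
 * A75 versus ~2 cycles/step on the A76. */
uint64_t poly_eval(const uint64_t *coef, int n, uint64_t x) {
    uint64_t acc = coef[0];
    for (int i = 1; i < n; i++)
        acc = acc * x + coef[i];   /* serialized on MADD latency */
    return acc;
}
```

Since nothing else limits this loop, going from a 3-cycle to a 2-cycle MADD translates almost directly into a ~1.5x speedup on dependent chains like this.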

The much larger and more important improvements can be found in the “VX” (vector execution) pipelines, which are in charge of FP and ASIMD operations. Arm calls the new pipeline a “state-of-the-art” design, and this is finally the result that has been hyped up for several years now.

Floating point arithmetic operations have been reduced in latency from 3 cycles down to 2, and multiply-accumulate has also shaved off a cycle, going from 5 cycles down to 4.

What Arm means by “dual 128-bit ASIMD” with doubled execution bandwidth is that on the A75 and prior, only one of the vector pipelines was 128 bits wide while the other was still 64-bit. On the A76 both vector pipelines are 128-bit, so 128-bit vector operations see a doubling of execution throughput.
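As a concrete illustration, the hypothetical NEON routine below issues two independent 128-bit fused multiply-accumulates per iteration. On the A76's twin 128-bit pipes both can be in flight in the same cycle, whereas on the A75 only one pipe accepts full-width operations:

```c
#include <arm_neon.h>

/* Two independent 128-bit FMAs per iteration: y0 += a*x0, y1 += a*x1.
 * On the A76 both can issue in the same cycle across the two 128-bit
 * vector pipes; on the A75 (one 128-bit pipe + one 64-bit pipe) the
 * full-width ops are limited to a single pipe.
 * n is assumed to be a multiple of 4. */
void axpy_x2(float *restrict y0, float *restrict y1,
             const float *restrict x0, const float *restrict x1,
             float a, int n) {
    float32x4_t va = vdupq_n_f32(a);
    for (int i = 0; i < n; i += 4) {
        float32x4_t r0 = vfmaq_f32(vld1q_f32(y0 + i), va, vld1q_f32(x0 + i));
        float32x4_t r1 = vfmaq_f32(vld1q_f32(y1 + i), va, vld1q_f32(x1 + i));
        vst1q_f32(y0 + i, r0);
        vst1q_f32(y1 + i, r1);
    }
}
```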

Moving onto more details of the data handling side, we again see the two load/store pipelines, something first implemented on the A73 and A75. Although depicted as a single issue queue in the slide, the LD/S pipelines each have their own queue, 16 entries deep.

The data cache is fixed at 64KB and is 4-way associative. Load latency remains at 4 cycles. The DTLBs run in a separate pipeline, as do the tag and data lookups. Arm's goal here is maximum MLP (memory-level parallelism) to be able to feed the core.

In a perfect machine everything would already be located in the caches, so it's important to have very robust prefetching capabilities. The A76 introduces new fourth-generation prefetchers to get nearer to this goal of perfect cache-hit operation. In all, the A76 has 4 different prefetching engines running in parallel, looking at various data patterns and loading data into the caches.
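The practical difference between a pattern the prefetch engines can track and one they cannot is easy to demonstrate; a small illustrative sketch:

```c
#include <stddef.h>

/* A linear sweep is the sort of pattern stride prefetchers lock
 * onto: data arrives in L1 before the core asks for it. */
long sum_stream(const long *a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* A dependent pointer chase defeats both prefetching and MLP: the
 * next address is only known once the current load returns, so every
 * step pays the full load-to-use latency (4 cycles from L1, far more
 * from L2/L3/DRAM). */
size_t chase(const size_t *next, size_t start, size_t steps) {
    size_t idx = start;
    while (steps--)
        idx = next[idx];   /* serialized, one outstanding load */
    return idx;
}
```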

In terms of the A76's cache hierarchy, Arm is said to have made no compromises, getting the best of both worlds in bandwidth and latency. The 64KB L1 instruction cache reads up to 32B/cycle, and the same bandwidth applies to the L1 data cache in both directions (at a 3GHz clock, 32B/cycle works out to roughly 96GB/s each way). The L1 is a writeback cache. The L2 cache is configurable in 256 or 512KB sizes and is D-side inclusive, with the same 2x 32B/cycle read and write interfaces up to the exclusive L3 cache in the 2nd generation DSU.

Overall the microarchitectural improvements on the core are said to improve memory bandwidth to DRAM by up to 90% in microbenchmarks.
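Claims like this are straightforward to sanity-check with a streaming microbenchmark. A rough sketch – the buffer size is an arbitrary choice, merely large enough to spill far past the L3:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define BUF_BYTES (256u << 20)   /* 256 MiB: far larger than any L3 */

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void) {
    uint8_t *src = malloc(BUF_BYTES), *dst = malloc(BUF_BYTES);
    if (!src || !dst) return 1;
    memset(src, 1, BUF_BYTES);   /* fault the pages in before timing */
    memset(dst, 0, BUF_BYTES);

    double t0 = now_sec();
    memcpy(dst, src, BUF_BYTES); /* streams through DRAM */
    double dt = now_sec() - t0;

    volatile uint8_t keep = dst[12345];  /* keep the copy observable */
    (void)keep;

    /* 2x: every byte is both read and written. */
    printf("DRAM copy bandwidth: %.2f GB/s\n", 2.0 * BUF_BYTES / dt / 1e9);
    free(src);
    free(dst);
    return 0;
}
```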

All in all, the microarchitecture of the A76 can be summed up in a single focus design point: maximise memory performance throughout the core by looking at every single cycle. During the design phase the engineers looked at feature changes with a sensitivity of as little as 0.25% in performance or power – if a change met that threshold, it was deemed a worthwhile addition to the core. Small percentages then in turn add up to create significant figures in the end product.

The focus on bandwidth and latency is said to have been extreme, and Arm was very adamant in reiterating that, to take full advantage of the microarchitecture, vendors need to implement an equally capable memory subsystem on the SoC. A figure that was put out there was 0.25% of performance per nanosecond of latency to main memory – by that metric, an extra 20ns on the DRAM round-trip would cost around 5% performance. As we've seen with the Snapdragon 845, one of the reasons the SoC didn't quite reach Arm's projected performance metrics was its degraded memory latency, which might have been introduced by the L4 system cache in the SoC. In the future vendors will need to focus more on providing latency-sensitive memory subsystems, as otherwise they'll be leaving free performance and power on the table – differences that could amount to basically a generational difference in CPU IP.

Comments

  • techconc - Tuesday, June 5, 2018 - link

    " A11 is no match in speed and performance versus SD845 and Exynos powered Android phones today"

    Huh? Benchmarks do not support your claim.
  • name99 - Friday, June 1, 2018 - link

    This assumption makes two mistakes.

    The first is to assume that ONE metric (in this case a 4-wide front end) is the PRIMARY determinant of performance. Even Apple's (A11) IPC, over a wide range of code, is maybe about 2.7. This means that on average fewer than 3 of those 6 execution units are being used per cycle. IF other parts of the core/uncore could be PERFECTED on a 4-wide design so that EVERY cycle 4 instructions executed, it would clearly surpass the A11 in IPC.
    The problem, of course, is just how hard it is to prevent cycles where NOTHING executes and cycles where only a few (one or two) instructions execute. Reducing these is where most of the magic is --- and you won't see details of that in an article like this; rather, it's in the painstaking rooting out of hundreds of small inefficiencies that the article talked about.

    To give just one example -- no-one is talking about the clustered page tables. This is a very cool idea which relies on the fact that most of the time the OS page allocator allocates a number of pages contiguously in virtual AND physical space, and with the same permissions. If that is so, the same TLB entry can cover multiple contiguous pages (in the academic literature, usually up to 8). This gives you a substantial increase in TLB reach at a very minor increase in TLB bits.
    (I can find no info as to whether Intel does this. I SUSPECT Apple used to do this in their earlier [and probably even A11] cores. There are very recent even better ideas in the academic literature that might, perhaps, have made it to the A12 core.)

    The second mistake you make is to ignore frequency. The A9 ran at 1.85 GHz, the A10 at 2.35 GHz. The A76 will likely run at 3GHz.
  • tipoo - Tuesday, September 4, 2018 - link

    And yet, here we are with single core results at around 60% that of the A11. Taking their own numbers at face value, a 56% increase over the A73 in GB4 results in 2800.

    Yes, I used a very simplistic one dimensional comparison, and there's a whole lot more to it. However, core complexity does go up almost exponentially with width, and so it does point to what ballpark they were aiming at. A76 was never going to beat the A11 per core because it was never aimed at it.
  • colinisation - Thursday, May 31, 2018 - link

    Hi Andrei,

    Is this core the one referred to as Ares on roadmaps?

    Been waiting years for this one if it is.
  • Andrei Frumusanu - Thursday, May 31, 2018 - link

    Yes in practical terms - no in actual terms. You'll likely hear more about this in the future.
  • tuxRoller - Friday, June 1, 2018 - link

    Ok, now you've incepted the idea that ARM is going to announce a dedicated server-class chip (maybe even a tease of SVE.....)
  • name99 - Friday, June 1, 2018 - link

    I agree with your point (ARM will release a server chip, essentially based on this core).
    Remember that GB4 is scaled so that 4000 represents an i7-6600U (Skylake, 3.4GHz, 4MiB L3, 15W).
    So A76 is essentially at that level (slightly worse FP, but many server tasks will not care).
    To the extent that a Skylake at 3.4GHz is an acceptable server-class core, ARM could dump some large number of A76s on a die and be in the same sort of space as the dearly-departed Centriq and ThunderX2.

    They likely would have to beef up their NoC one way or another, and tweak the caching and memory systems, the usual server additions.
    But I assume they didn't put all that effort into "lowest possible latency for hypervisor activity" on the theory that hypervisor performance on smartphones is THE next big thing...
  • joe_85 - Thursday, May 31, 2018 - link

    I am sure people will disagree with me because people love to argue. Andrei, I like your writing and overall thoroughness but a few critiques here. The charts you make are extremely unpleasant to look at and do not lend themselves to a quick assessment of the data.

    First of all, the color-coded stripes in the legend for the A76 projections are not even decipherable, and the actual bars on the chart are difficult to see. Secondly, why are you color coding them at all? Just put processor names to the left of the bars and the benchmark name above the bars.

    Additionally, are the bars in any particular order? If they are I certainly can't tell; they should be ordered by performance OR efficiency.

    Other constructive criticism would be that adding some additional subheadings within your articles would make it feel like a more solid piece.

    Keep up the good work.
  • jospoortvliet - Wednesday, June 6, 2018 - link

    Loving to argue or not, the performance vs efficiency graphs are rather unique and very clever; I have not seen any other design that so clearly shows how different SoCs compare at both. Yes, it takes a few minutes before you can read them – I am sorry that the world is so complicated. But they work very well if you just use that gray matter a bit.
  • syxbit - Thursday, May 31, 2018 - link

    As an Android user, I continue to be disappointed with QCOMM, Arm (and Nvidia for dropping out) at how far ahead Apple is in single-threaded perf.
