Thoughts and Comparisons

Throughout AMD's road to releasing details on Zen, we have had the chance to examine information on the microarchitecture, often earlier than we expected at each point in the design/launch cycle. Part of this is because, internally, AMD is very proud of its design, but some extra details (such as the extent of XFR, or the size of the micro-op cache) AMD has held close to its chest until the actual launch. With the data we have at hand, we can fill out a direct comparison chart between AMD’s last product and Intel’s current offerings.

CPU uArch Comparison

                    AMD             AMD             Intel            Intel
                    Zen             Bulldozer       Skylake /        Broadwell
                                                    Kaby Lake
Cores / Threads     8C / 16T        4M / 8T         4C / 8T          8C / 16T
Launched            2017            2011            2015 / 2017      2014
L1-I Size           64KB/core       64KB/module     32KB/core        32KB/core
L1-I Assoc          4-way           2-way           8-way            8-way
L1-D Size           32KB/core       16KB/thread     32KB/core        32KB/core
L1-D Assoc          8-way           4-way           8-way            8-way
L2 Size             512KB/core      1MB/thread      256KB/core       256KB/core
L2 Assoc            8-way           16-way          4-way            8-way
L3 Size             2MB/core        1MB/thread      2MB/core         1.5-3MB/core
L3 Assoc            16-way          64-way          16-way           16/20-way
L3 Type             Victim          Victim          Write-back       Write-back
L0 ITLB Entry       8               -               -                -
L0 ITLB Assoc       ?               -               -                -
L1 ITLB Entry       64              72              128              128
L1 ITLB Assoc       ?               Full            8-way            4-way
L2 ITLB Entry       512             512             1536             1536
L2 ITLB Assoc       ?               4-way           12-way           4-way
L1 DTLB Entry       64              32              64               64
L1 DTLB Assoc       ?               Full            4-way            4-way
L2 DTLB Entry       1536            1024            -                -
L2 DTLB Assoc       ?               8-way           -                -
Decode              4 uops/cycle    4 Mops/cycle    5 uops/cycle     4 uops/cycle
uOp Cache Size      2048            -               1536             1536
uOp Cache Assoc     ?               -               8-way            8-way
uOp Queue Size      ?               -               128              64
Dispatch / cycle    6 uops/cycle    4 Mops/cycle    6 uops/cycle     4 uops/cycle
INT Registers       168             160             180              168
FP Registers        160             96              168              168
Retire Queue        192             128             224              192
Retire Rate         8/cycle         4/cycle         8/cycle          4/cycle
Load Queue          72              40              72               72
Store Queue         44              24              56               42
ALU                 4               2               4                4
AGU                 2               2               2+2              2+2
FMAC                2x128-bit       2x128-bit,      2x256-bit        2x256-bit
                                    2x MMX 128-bit
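As a quick sanity check on the cache rows, size, associativity, and line size pin down the number of sets in each cache (sets = size ÷ (ways × line size)). A minimal sketch of that arithmetic, assuming the 64-byte cache lines all of these cores use:

```c
// Sets = size / (ways * line_size). A quick check of the cache rows above,
// assuming 64-byte cache lines (true for every core in the table).
#include <stdio.h>

static unsigned sets(unsigned size_bytes, unsigned ways, unsigned line) {
    return size_bytes / (ways * line);
}

int main(void) {
    const unsigned kb = 1024, line = 64;
    printf("Zen L1-I (64KB, 4-way):  %u sets\n", sets(64 * kb, 4, line));  // 256
    printf("Zen L1-D (32KB, 8-way):  %u sets\n", sets(32 * kb, 8, line));  // 64
    printf("Zen L2 (512KB, 8-way):   %u sets\n", sets(512 * kb, 8, line)); // 1024
    printf("SKL L2 (256KB, 4-way):   %u sets\n", sets(256 * kb, 4, line)); // 1024
    return 0;
}
```

One detail this makes visible: Zen's L2 doubles Skylake's capacity by doubling associativity over the same 1024-set arrangement.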

Bulldozer uses AMD-coined macro-ops, or Mops, which are internal fixed-length instructions that can each account for up to three smaller ops. These AMD Mops are different from Intel's 'macro-ops', which are variable length, and from Intel's 'micro-ops', which are simpler and fixed-length.

Excavator has a number of improvements over Bulldozer, such as a larger L1-D cache and a 768-entry L1 BTB; however, we were never given a full run-down of that core in similar fashion, and no high-end desktop version of Excavator will be made.

This isn’t an exhaustive list of all features (thanks to CPU World, Real World Tech, and WikiChip for filling in some blanks) by any means, and it doesn’t paint the whole story. For example, on the power side of the equation, AMD states that it can clock gate parts of the core and CCX that are not required, in order to save power, and that the L3 runs on its own clock domain shared across the cores. There is also the latency of individual operations, which is critical to a workload when it matters whether a MUL takes 3, 4, or 5 cycles to complete. We have been told that FPU loads are two cycles quicker than before, which is something. The latency of the caches is also going to feature heavily in performance, and all we are told at this point is that the L2 and L3 have lower latency than previous designs.
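As an aside, latencies of this sort can be estimated from userspace by timing a long chain of dependent operations, so that each one must wait for the previous result. A rough sketch for integer multiply, assuming an x86-64 GCC/Clang toolchain and a core running near its TSC reference clock (the printed figure is approximate, typically around 3 cycles on recent cores):

```c
// Rough sketch: estimate integer multiply latency from a long chain of
// dependent multiplies. Assumes x86-64 with GCC/Clang, and that the TSC
// ticks close to the core clock, so the result is only approximate.
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>

int main(void) {
    volatile uint64_t seed = 3;     // volatile so the chain can't be folded away
    uint64_t x = seed;
    const int iters = 100000000;

    uint64_t start = __rdtsc();
    for (int i = 0; i < iters; i++)
        x *= 0x9E3779B97F4A7C15ULL; // each multiply depends on the last
    uint64_t end = __rdtsc();

    printf("result %llx (keeps the loop alive)\n", (unsigned long long)x);
    printf("~%.2f reference cycles per dependent multiply\n",
           (double)(end - start) / iters);
    return 0;
}
```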

A number of these features we have already seen on Intel x86 CPUs, such as move elimination to reduce power, or the micro-op cache. The micro-op cache is a piece of the puzzle we wanted to know more about from day one, especially the rate at which we get cache hits for a given workload. The use of new instructions will also affect a number of workloads that rely on them. Some users will lament the lack of native 256-bit AVX2 execution, with Zen instead splitting 256-bit operations into two 128-bit micro-ops, however I suspect AMD would argue that the die area cost might be excessive at this time. That’s not to say AMD won’t support it in the future – we were told quite clearly that there were a number of features originally listed internally for Zen which didn’t make it, either due to time constraints or a lack of transistors.
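To make that concrete: the 256-bit operation below is a single instruction on any AVX-capable CPU, but Zen executes it as two 128-bit micro-ops internally, while Skylake and Broadwell execute it as one 256-bit operation. A minimal example (compile with -mavx):

```c
// One 256-bit vaddps instruction: a single 256-bit uop on Skylake/Broadwell,
// cracked into two 128-bit uops on Zen. Compile with: gcc -O2 -mavx demo.c
#include <stdio.h>
#include <immintrin.h>

int main(void) {
    __m256 a = _mm256_set1_ps(1.5f);
    __m256 b = _mm256_set1_ps(2.5f);
    __m256 c = _mm256_add_ps(a, b); // compiles to one vaddps ymm instruction

    float out[8];
    _mm256_storeu_ps(out, c);
    printf("%f\n", out[0]);         // prints 4.000000
    return 0;
}
```

The instruction count is the same either way; the difference is purely in throughput per cycle inside the core.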

We are told that AMD has a clear internal roadmap for CPU microarchitecture design over the next few generations. As long as the company does not stay on 14nm for as long as it did at 28/32nm, and with IO updates over the coming years, a competitive clock-for-clock product (even against Broadwell) with good efficiency will be a welcome return.

Comments

  • nt300 - Saturday, March 11, 2017 - link

    If AMD hadn't gone with GF's 14nm process, ZEN would probably have been delayed. I think as soon as Ryzen optimizations come out, these chips will perform even better.
  • MongGrel - Thursday, March 9, 2017 - link

    For some reason, making a casual comment about anything bad about the chip will get you banned at the drop of a hat on the tech forums, and then if you call him out they will ban you more.

    https://arstechnica.com/gadgets/2017/03/amds-momen...

  • MongGrel - Thursday, March 9, 2017 - link

    For some reason, MarkFW seems to think he is the reincarnation of Kyle Bennet, and whines a lot before retreating to his safe space.
  • nt300 - Saturday, March 11, 2017 - link

    I've noticed in the past that AMD has had issues with L3 cache speed and/or latency. Hopefully they start tightening the L3 as much as possible. Can AnandTech do a comparison between Ryzen before optimizations and after? Ty
  • alpha754293 - Friday, March 17, 2017 - link

    Looks like for a lot of the compute-intensive benchmarks, the new Ryzen isn't that much better than, say, a Core i7-7700K.

    That's quite disappointing.

    AMD needs to up their FLOPS/cycle game in order to be able to compete in that space.

    Such a pity, because the original Opterons were a great value proposition vs. Intel's offerings. Now, it doesn't even come close.
  • deltaFx2 - Saturday, March 25, 2017 - link

    @Ian Cutress: When you do test gaming, if you can, I'd love to have the hypothesis behind the 'generally accepted methodology' tested out. The methodology being to test at the lowest resolution. The hypothesis is that this stresses the CPU, and that a future, higher-performance GPU will be bottlenecked by the slower CPU. Sounds logical, but is it?

    Here's the thing: typically, when given more computing resources, people scale up their problem to utilize those resources. In other words, if I give you a more powerful GPU, games will scale up their performance requirements to match it, by doing things that were not possible/practical on earlier GPUs. Today's games are far more 'realistic' and are played at much higher resolutions than, say, 5 years ago. In that case, the GPU is always the limiting factor no matter what (unless one insists on playing 5-year-old games on the biggest, baddest GPU). And I fully expect that the games of today are built to max out current GPUs, so hardware lags software.

    This has parallels with what happens in HPC: when you get more compute nodes for HPC problems, people scale up the complexity of their simulations rather than running the old, simplified simulations. Amdahl's law is still not a limiting factor for HPC, and we seem to be talking about Exascale machines now. Clearly, there's life in HPC beyond what a myopic view through the Amdahl law lens would indicate.

    Just a thought :) Clearly, core count requirements have gone up over the last decade, but is it true that a 4c/8t Sandy Bridge paired with Nvidia's latest and greatest is CPU-bottlenecked at likely resolutions?
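    (As a back-of-the-envelope on the Amdahl's law framing: a minimal sketch with purely illustrative parameters, not measured data.)

```c
// Amdahl's law: speedup = 1 / ((1 - p) + p / s), where p is the parallel
// fraction and s the speedup of the parallel part. Parameters here are
// illustrative assumptions, not measurements from any game or HPC code.
#include <stdio.h>

static double amdahl(double p, double s) {
    return 1.0 / ((1.0 - p) + p / s);
}

int main(void) {
    const double p = 0.90;               // assume 90% of the work scales
    for (double s = 2; s <= 64; s *= 2)  // faster GPU / more nodes
        printf("s=%2.0fx -> overall %.2fx\n", s, amdahl(p, s));
    return 0;                            // caps at 1/(1-p) = 10x as s grows
}
```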
  • wavelength - Friday, March 31, 2017 - link

    I would love to see Anand test against AdoredTV's most recent findings on Ryzen https://www.youtube.com/watch?v=0tfTZjugDeg
  • LawJikal - Friday, April 21, 2017 - link

    What I'm surprised to see missing... in virtually all reviews across the web... is any discussion (by a publication or its readers) on the AM4 platform's longevity and upgradability (in addition to its cost, which is readily discussed).

    Any Intel Platform - is almost guaranteed to not accommodate a new or significantly revised microarchitecture... beyond the mere "tick". In order to enjoy a "tock", one MUST purchase a new motherboard (if historical precedent is maintained).

    AMD AM4 Platform - is almost guaranteed to, AT LEAST, accommodate Ryzen "II" and quite possibly Ryzen "III" processors. And, in such cases, only a new processor and BIOS update will be necessary to do so.

    This is not an insignificant point of differentiation.
  • PeterCordes - Monday, June 5, 2017 - link

    The uArch comparison table has some errors in the Intel columns. Dispatch/cycle: you've labelled Skylake as 6 uops/cycle, and while it can read 6 uops per clock from the uop cache into the issue queue, the issue stage itself is still only 4 uops wide. Even running from the loop buffer (LSD), it can only sustain a throughput of 4 uops per clock, the same 4-wide pipeline width it has had since Core 2. (Pre-Haswell, it has to be a mix of ALU and some store or load to sustain that throughput without bottlenecking on the execution ports.) Skylake's improved decode and uop-cache bandwidth lets it refill the uop queue (IDQ) after bubbles in earlier stages, keeping the issue stage fed (since the back-end is often able to actually keep up).

    Ryzen is 6-wide, but I think I've read that it can only issue 6 uops per clock if some of them are from 'double instructions', e.g. 256-bit AVX like VADDPS ymm0, ymm1, ymm2 that decodes to two separate 128-bit uops. Running code with only single-uop instructions, Ryzen's front-end throughput is 5 uops per clock.

    In Intel terminology, "dispatch" is when the scheduler (aka Reservation Station) sends uops to the execution units. The row you've labelled "dispatch / cycle" is clearly the throughput for issuing uops from the front-end into the out-of-order core, though. (Putting them into the ROB and Reservation Station). Some computer-architecture people call that "dispatch", but it's probably not a good idea in an x86 context. (Unless AMD uses that terminology; I'm mostly familiar with Intel).

    ----

    You list the uop queue size as 128 for Skylake. This is bogus. It's always 64 per thread, with or without hyperthreading. Intel has alternated across SnB/IvB/HSW/SKL between statically partitioning the queue per thread and letting one thread use both halves as a single big queue. HSW/BDW statically partition their 56-entry queue into two 28-entry halves when two threads are active, otherwise it's a 56-entry queue (not 64). Agner Fog's microarch pdf and Intel's optimization manual both confirm this (in Section 2.1.1, about Skylake's front-end improvements over previous generations).

    Also, the 4-uop-per-clock issue width is 4 fused-domain uops, so I was able to construct a loop that runs 7 unfused-domain uops per clock (http://www.agner.org/optimize/blog/read.php?i=415#...) with 2 micro-fused ALU+load, one micro-fused store, and a dec/branch. AMD doesn't talk about 'unfused' uops because it doesn't use a unified scheduler, IIRC, so memory source operands always stay with the ALU uop.
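    A minimal sketch of what such a loop could look like, in GNU inline asm (a hypothetical reconstruction; the exact code is in the linked post):

```c
// Per iteration: 4 fused-domain uops (two micro-fused load+ALU, one
// micro-fused store, one macro-fused dec/jnz) = 7 unfused-domain uops.
// Hypothetical reconstruction, not the exact loop from the linked thread.
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t acc_a = 0, acc_b = 0, sink = 0, buf = 1;
    uint64_t n = 100000000;
    __asm__ volatile(
        "1:\n\t"
        "addq (%[src]), %[a]\n\t"  /* micro-fused load+ALU              */
        "addq (%[src]), %[b]\n\t"  /* micro-fused load+ALU              */
        "movq %[a], %[sink]\n\t"   /* store: store-address + store-data */
        "decq %[n]\n\t"            /* macro-fuses with the jnz below    */
        "jnz 1b\n\t"
        : [a] "+r"(acc_a), [b] "+r"(acc_b), [n] "+r"(n), [sink] "=m"(sink)
        : [src] "r"(&buf)
        : "cc", "memory");
    printf("%llu %llu %llu\n", (unsigned long long)acc_a,
           (unsigned long long)acc_b, (unsigned long long)sink);
    return 0;
}
```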

    Also, you mentioned it in the text, but the L1d change from write-through to write-back is worth a table row. IIRC, Bulldozer's write-through L1d has a small buffer or something to absorb repeated writes of the same lines, so it's not quite as bad as a classic write-through cache would be for L2 speed/power requirements, but Ryzen's write-back L1d is still a big improvement.
