Some Final Thoughts and Comparisons

With the Hot Chips presentation, we have been given more information on the Zen core microarchitecture than we expected to have at this point in the design/launch cycle. AMD has already stated that general availability for Zen will be in Q1, and 'Zen' might not even be the final product name/brand when it comes to market. However, there are still plenty of gaps in our knowledge of the hardware, and AMD has promised to reveal this information as we get closer to launch.

As we discussed in our earlier piece on the Zen performance metrics released mid-week, anecdotal benchmark data is hard to interpret at this point when there is so much we don't know (or can't confirm). With the data from this Hot Chips talk, however, we can fill out much of a direct comparison chart between Zen, AMD's last big core, and Intel's current offerings.

CPU uArch Comparison

|                 | AMD Zen (8C/16T, 2017) | AMD Bulldozer (4M/8T, 2011) | Intel Skylake (4C/8T, 2015) | Intel Broadwell (8C/16T, 2014) |
|-----------------|------------------------|------------------------------|------------------------------|--------------------------------|
| L1-I Size       | 64KB/core              | 64KB/module                  | 32KB/core                    | 32KB/core                      |
| L1-I Assoc      | 4-way                  | 2-way                        | 8-way                        | 8-way                          |
| L1-D Size       | 32KB/core              | 16KB/thread                  | 32KB/core                    | 32KB/core                      |
| L1-D Assoc      | 8-way                  | 4-way                        | 8-way                        | 8-way                          |
| L2 Size         | 512KB/core             | 1MB/thread                   | 256KB/core                   | 256KB/core                     |
| L2 Assoc        | 8-way                  | 16-way                       | 4-way                        | 8-way                          |
| L3 Size         | 2MB/core               | 1MB/thread                   | 2MB/core                     | 1.5-3MB/core                   |
| L3 Assoc        | 16-way                 | 64-way                       | 16-way                       | 16/20-way                      |
| L3 Type         | Victim                 | Victim                       | Write-back                   | Write-back                     |
| L0 ITLB Entries | 8                      | -                            | -                            | -                              |
| L0 ITLB Assoc   | ?                      | -                            | -                            | -                              |
| L1 ITLB Entries | 64                     | 72                           | 128                          | 128                            |
| L1 ITLB Assoc   | ?                      | Full                         | 8-way                        | 4-way                          |
| L2 ITLB Entries | 512                    | 512                          | 1536                         | 1536                           |
| L2 ITLB Assoc   | ?                      | 4-way                        | 12-way                       | 4-way                          |
| L1 DTLB Entries | 64                     | 32                           | 64                           | 64                             |
| L1 DTLB Assoc   | ?                      | Full                         | 4-way                        | 4-way                          |
| L2 DTLB Entries | 1536                   | 1024                         | -                            | -                              |
| L2 DTLB Assoc   | ?                      | 8-way                        | -                            | -                              |
| Decode          | 4 uops/cycle           | 4 Mops/cycle                 | 5 uops/cycle                 | 4 uops/cycle                   |
| uOp Cache Size  | ?                      | -                            | 1536                         | 1536                           |
| uOp Cache Assoc | ?                      | -                            | 8-way                        | 8-way                          |
| uOp Queue Size  | ?                      | -                            | 128                          | 64                             |
| Dispatch/cycle  | 6 uops/cycle           | 4 Mops/cycle                 | 6 uops/cycle                 | 4 uops/cycle                   |
| INT Registers   | 168                    | 160                          | 180                          | 168                            |
| FP Registers    | 160                    | 96                           | 168                          | 168                            |
| Retire Queue    | 192                    | 128                          | 224                          | 192                            |
| Retire Rate     | 8/cycle                | 4/cycle                      | 8/cycle                      | 4/cycle                        |
| Load Queue      | 72                     | 40                           | 72                           | 72                             |
| Store Queue     | 44                     | 24                           | 56                           | 42                             |
| ALU             | 4                      | 2                            | 4                            | 4                              |
| AGU             | 2                      | 2                            | 2+2                          | 2+2                            |
| FMAC            | 2x128-bit              | 2x128-bit + 2x 128-bit MMX   | 2x256-bit                    | 2x256-bit                      |

Bulldozer uses AMD-coined macro-ops, or Mops, which are internal fixed-length instructions, each able to account for three smaller ops. These AMD Mops are different from Intel's 'macro-ops', which are variable length, and from Intel's 'micro-ops', which are simpler and fixed length.

Excavator brought a number of improvements over Bulldozer, such as a larger L1-D cache and a 768-entry L1 BTB; however, we were never given a full run-down of that core in similar fashion, and no high-end desktop version of Excavator will be made.

This is by no means an exhaustive list of features (thanks to CPU World, Real World Tech and WikiChip for filling in some blanks), and it doesn't paint the whole story. On the power side of the equation, for example, AMD states that it can clock gate the parts of the core and the CCX that are not needed, and that the L3 runs on its own clock domain shared across the cores. There is also the latency of individual operations, which is critical to a workload when it matters whether a MUL takes 3, 4 or 5 cycles to complete. We have been told that the FPU load is two cycles quicker than before, which is something. Cache latency will also feature heavily in performance, and all we are told at this point is that the L2 and L3 are lower latency than previous designs.
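
For context, per-instruction latency of this kind is typically measured with a long chain of dependent operations, so each one has to wait for the previous result; dividing total cycles by chain length approximates the latency. A minimal C++ sketch, assuming a GCC/Clang x86 toolchain; the multiplier constant, the chain length N, and the use of __rdtsc (which counts reference cycles rather than core clocks) are our own illustrative choices, not AMD's numbers:

    #include <cstdint>
    #include <cstdio>
    #include <x86intrin.h>   // __rdtsc() on GCC/Clang, x86 only

    int main() {
        volatile uint64_t seed = 3;      // volatile: stops the chain being folded away
        uint64_t x = seed;
        const int N = 100000000;         // chain length (illustrative)

        uint64_t start = __rdtsc();
        for (int i = 0; i < N; ++i)
            x *= 0x9E3779B97F4A7C15ULL;  // each MUL depends on the previous result
        uint64_t end = __rdtsc();

        // Loop overhead executes in parallel with the multiply chain, so this
        // approximates the latency of one dependent 64-bit multiply.
        printf("~%.2f cycles per dependent MUL (x=%llu)\n",
               (double)(end - start) / N, (unsigned long long)x);
        return 0;
    }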

A number of these features we have already seen on Intel x86 CPUs, such as move elimination to reduce power, or the micro-op cache. The micro-op cache is a piece of the puzzle we want to know more about, especially the hit rate it achieves for a given workload. The use of new instructions will also change a number of workloads that rely on them. Some users will lament the lack of full-width, single-instruction AVX2 execution; however, I suspect AMD would argue that the die area cost would be excessive at this time. That's not to say AMD won't support it in the future: we were told quite clearly that a number of features originally on the internal list for Zen didn't make it, either due to time constraints or a lack of transistors.
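
To put the 128-bit point in context, here is a hedged illustration (the function name and loop structure are our own): AVX2/FMA code like this is written identically for every x86 core, and the difference is internal, with Zen cracking each 256-bit operation into two 128-bit micro-ops, so peak throughput is roughly half that of a core with native 256-bit units. Build with -mavx2 -mfma:

    #include <immintrin.h>   // AVX2 + FMA intrinsics
    #include <cstddef>

    // a[i] += b[i] * c[i] over 8-wide float vectors. The same binary runs on
    // Zen and Skylake; Zen simply executes each 256-bit op as two 128-bit halves.
    void fma_arrays(float* a, const float* b, const float* c, std::size_t n) {
        std::size_t i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256 vb = _mm256_loadu_ps(b + i);
            __m256 vc = _mm256_loadu_ps(c + i);
            __m256 va = _mm256_loadu_ps(a + i);
            _mm256_storeu_ps(a + i, _mm256_fmadd_ps(vb, vc, va));
        }
        for (; i < n; ++i)               // scalar tail
            a[i] += b[i] * c[i];
    }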

We are told that AMD has a clear internal roadmap for CPU microarchitecture design over the next few generations. As long as we don't linger on 14nm the way the industry did at 28/32nm, and with IO updates over the coming years, a competitive clock-for-clock product (even against Broadwell) with good efficiency will be a welcome return.

Comments
  • bcronce - Tuesday, August 23, 2016 - link

    An exclusive L3 cache makes better use of space, but requires snooping the other cores' L2 caches for data. If the L3 holds everything the L2 caches hold, you only need to check one place.

    This is important when you're trying to synchronize threads, since locks are shared memory locations that each core is attempting to read and update. Common thread-safe data structures can take some pretty big performance scaling hits from this, although you can work around it in the data structure itself (a sketch of the access pattern is at the end of this comment).

    One research paper I read showed exclusive caches having twice the latency of inclusive ones when snooping was required. If a data structure scales well up to 16 cores on Intel's inclusive cache, it may cap out around 8 cores on AMD's exclusive design, thanks to Amdahl's law.

    Cache snooping gets slower as more cores are added. Gotta check them all.
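
    As a rough sketch of that access pattern (iteration counts arbitrary): two threads doing read-modify-writes on one shared atomic force its cache line to migrate between cores on every handoff, which is exactly the case where a snoop into another core's L2 costs more than a single inclusive-L3 lookup. Build with -pthread:

        #include <atomic>
        #include <cstdio>
        #include <thread>

        std::atomic<long> counter{0};   // one cache line, contended by both threads

        void worker(int iters) {
            for (int i = 0; i < iters; ++i)
                counter.fetch_add(1, std::memory_order_relaxed);  // RMW takes line ownership
        }

        int main() {
            std::thread t1(worker, 10000000);
            std::thread t2(worker, 10000000);
            t1.join(); t2.join();
            std::printf("%ld\n", counter.load());
        }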
  • deltaFx2 - Tuesday, August 23, 2016 - link

    @bcronce: Except that Intel doesn't do strictly inclusive caches either. Intel's caches are neither-inclusive-nor-exclusive (afaik), in which data is inserted into both L2 and L3 but evicted independently, so you have to check L2 and L3 independently, same as with an exclusive cache. Strictly inclusive caches have many bad properties; a few that come to mind immediately: (1) false eviction of lines: if a block constantly hits in L2, its LRU position in L3 is not updated, so if the block then becomes the oldest in L3 and is evicted, it must be evicted from L2 as well, resulting in a miss all the way to memory; (2) the associativity of the L3 must be at least the sum of the associativities of the L2 caches hanging off it, otherwise it will constrain the associativity of the L2s. Hence neither-inclusive-nor-exclusive, or strictly exclusive.

    Exclusive caches are harder to build, true, because you have to manage exclusivity. That doesn't explain Ian's comment about them being less efficient.
  • 68k - Wednesday, August 24, 2016 - link

    The Intel manual state that

    "The shared L3 cache is writeback and inclusive, such that a cache line that exists in either L1 data cache, L1 instruction cache, unified L2 cache also exists in L3."

    That is, the L3 cache is strictly inclusive of anything stored in the core-local L2/L1 caches, so it is enough to check the L3 to see whether a cache line is in use by any other core sharing that L3.
  • bcronce - Wednesday, August 24, 2016 - link

    @68k
    Thanks for looking it up. I only remember Intel talking about this years ago when they made the design decision in order to minimize latency. Certain operations, like thread synchronization, are extremely latency sensitive.

    The strange thing is that AMD is pushing for so many cores, yet chooses a cache design that makes sharing data more expensive. What they gain is that exclusive caches tend to have more bandwidth and are great for independent threads with little sharing. It's a trade-off; nothing is free, and there are pros and cons everywhere.
  • deltaFx2 - Wednesday, August 24, 2016 - link

    @68k, @bcronce: I guess I haven't looked up Intel's latest and greatest cache organization :) I do recall that neither-inclusive-nor-exclusive was their scheme for quite a while, probably until Sandy Bridge. Perhaps that explains why their L2 went from 8-way to 4-way in Skylake; the extra associativity cannot be effectively utilized under strict inclusion as you keep adding more cores (a single set in L3 maps to a unique set in L2, so with a 16-way L3, only 16 lines that map to that set can reside across the L2s; obviously, multiple L3 sets map to the same L2 set, which mitigates this somewhat, but it is a glass jaw). A toy version of the arithmetic is at the end of this comment.

    The nice thing about Intel's organization is that it's a monolithic L3 with variable latency to slices, as opposed to AMD's distributed L3. That probably is what adds the latency (if it does) on cache-to-cache transfers, not the inclusive-vs-exclusive, or the inclusive cache acting as a probe filter. You could just as easily add a separate probe filter to avoid unnecessary coherence lookups. Would you point me to that paper you quoted earlier? I have a hard time believing that the problem is the exclusive cache itself, and not the organization of the cache. Anyway, I don't know enough about AMD's design to comment, so I'll leave it at that. Thanks!
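
    Back to the inclusion/associativity point, a toy check with assumed numbers (four 8-way L2s under a 16-way inclusive L3):

        #include <cstdio>

        int main() {
            const int l3_ways = 16;                 // lines one L3 set can hold
            const int cores = 4, l2_ways = 8;       // per-core L2 associativity
            const int l2_demand = cores * l2_ways;  // lines the L2s could collectively want
            std::printf("L3 set holds %d lines; L2s may want %d -> inclusion %s\n",
                        l3_ways, l2_demand,
                        l2_demand > l3_ways ? "constrains the L2s" : "is harmless");
            return 0;
        }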
  • intangir - Wednesday, August 24, 2016 - link

    As far as I know, since Nehalem Intel's L3 caches have been fully inclusive of L1+L2, but the L1 and L2 caches are neither inclusive nor exclusive with respect to each other.
  • Ryan Smith - Tuesday, August 23, 2016 - link

    Right you are. That's a typo on our end, and in the deep dive section on cache you can see why it's exclusive. As for the first page, I've corrected the typo.
  • looncraz - Wednesday, August 24, 2016 - link

    Zen's L3 is "mostly exclusive." This changes things up a bit - it isn't a pure victim cache and will probably contain data used between multiple cores. The first access will be slower as the data is snooped from another core's L2, but then that data will be mirrored in the L3. The coherent data fabric which links multiple core complexes adds a whole new level of complexity for sharing data between cores, but I suspect a mechanism exists to synchronize global data between the L3 caches, so global data will have a copy in each L3 and actions on global data will incur a latency penalty, but nothing compared to snooping L2s across multiple core complexes.
  • NikosD - Wednesday, August 24, 2016 - link

    It seems that AMD did its job right this time.

    Most of the CPU's features sit between the Broadwell and Skylake architectures, which is extremely important, with the exception of AVX/AVX2 instructions, which are executed in 128-bit chunks instead of 256-bit.

    Of course, we have to wait and see the latencies and throughput of the rest of the arithmetic instructions, but those are details.

    I think with Zen we will all owe a lot to AMD, as in the old days of 64-bit CPUs and OSes.

    This time the revolution will be an affordable, true 8-core/16-thread desktop CPU with no GPU inside, for the first time.

    The key point here is price; it has to be affordable, unlike Intel's high-end desktop systems.

    That move would force Intel to accept that we, as customers, want 8 cores in our CPUs, just as we wanted 64-bit CPUs and OSes back when Intel offered 64-bit only with Itanium.

    All in all, AMD could possibly hold in its hands a true winner, from laptops to servers that brings us memories of AMD Athlon and Opteron CPUs.

    Well done AMD!
  • Michael Bay - Wednesday, August 24, 2016 - link

    Do we, though? General-purpose software like word processors is literally indistinguishable between 2 and 4 cores, and a lot of things on the content creation side are already accelerated by the GPU.
    There are games, of course, but the CPU stopped being the bottleneck there a long time ago.
