CCX Size

Moving down in node size brings up a number of challenges in the core and beyond. Even disregarding power and frequency, the ability to put structures into silicon and then integrate that silicon into the package, as well as providing power to the right parts of the silicon through the right connections becomes an exercise in itself. AMD gave us some insight into how 7nm changed some of its designs, as well as the packaging challenges therein.

A key metric provided by AMD relates to the core complex (CCX): four cores, the associated core structures, and the L2 and L3 caches. With 12nm and the Zen+ core, AMD stated that a single core complex was ~60 square millimeters, which breaks down into 44mm2 for the cores and 16mm2 for the 8MB of L3 per CCX. Add two of these 60mm2 complexes to a memory controller, PCIe lanes, four IF links, and other IO, and a Zen+ zeppelin die came to 213mm2 in total.
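Those figures add up neatly; a quick back-of-the-envelope check using only the numbers AMD disclosed shows roughly how much of the zeppelin die was uncore:

```python
# Zen+ (12nm) die-area budget, using AMD's published figures.
ccx_cores_mm2 = 44.0   # four Zen+ cores and core-private structures
ccx_l3_mm2 = 16.0      # 8 MB of L3 per CCX
ccx_mm2 = ccx_cores_mm2 + ccx_l3_mm2      # ~60 mm2 per core complex

zeppelin_mm2 = 213.0                      # full Zen+ zeppelin die
uncore_mm2 = zeppelin_mm2 - 2 * ccx_mm2   # memory controller, PCIe, 4x IF, other IO

print(ccx_mm2)     # 60.0 mm2 per CCX
print(uncore_mm2)  # ~93 mm2 of non-CCX silicon
```

In other words, well over 40% of a zeppelin die was interconnect and IO rather than cores and cache, which is exactly the part Zen 2 moves off the 7nm chiplet.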

For Zen 2, a single chiplet is 74mm2, of which 31.3mm2 is a core complex with 16 MB of L3. AMD did not break down this 31.3 number into cores and L3, but one might expect the L3 to approach 50% of that figure. The reason the chiplet is so much smaller is that it doesn't need memory controllers, it has only one IF link, and it has no IO, because all of the platform requirements are on the IO die. This allows AMD to make the chiplets extremely compact. However, if AMD intends to keep increasing the L3 cache, we might end up with most of the chip as L3.

Overall, however, AMD has stated that the CCX (cores plus L3) has decreased in size by 47%. That is excellent scaling, especially once the +15% raw instruction throughput and increased frequency come into play. Performance per mm2 is going to be a very exciting metric.
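The stated 47% lines up with the raw area numbers above; a naive division of the two CCX areas gives roughly the same figure (the small difference presumably comes from AMD using unrounded measurements), and remember this shrink comes while doubling the L3:

```python
# CCX area scaling from Zen+ (12nm) to Zen 2 (7nm), per AMD's figures.
zen_plus_ccx_mm2 = 60.0   # 4 cores + 8 MB L3
zen2_ccx_mm2 = 31.3       # 4 cores + 16 MB L3 (double the cache)

reduction = 1 - zen2_ccx_mm2 / zen_plus_ccx_mm2
print(f"{reduction:.0%}")  # ~48%, in line with AMD's stated 47%
```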

Packaging

With Matisse staying in the AM4 socket, and Rome in the EPYC socket, AMD stated that it had to make some bets on its packaging technology in order to maintain compatibility. Invariably some of these bets end up as tradeoffs for continual support, however AMD believes that the extra effort has been worth the continued compatibility.

One of the key points AMD spoke about with relation to packaging is how each of the silicon dies is attached to the package. In order to enable a pin-grid array desktop processor, the silicon has to be affixed to the package in a BGA fashion. AMD stated that due to the 7nm process, the bump pitch (the distance between the solder balls on the silicon die and package) reduced from 150 microns on 12nm to 130 microns on 7nm. This may not sound like much, but AMD stated that there are only two vendors in the world with technology sufficient to do this. The only alternative would be a larger piece of silicon to support a larger bump pitch, ultimately leading to a lot of empty silicon (or a different design paradigm).
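A 20-micron pitch reduction matters more than it sounds, because bump count scales with the square of the pitch. A minimal sketch of that relationship, using only the two pitch figures from AMD:

```python
# Bump density scales inversely with the square of the bump pitch:
# halving the pitch quadruples the bumps per unit area.
old_pitch_um = 150.0  # 12nm-generation bump pitch
new_pitch_um = 130.0  # 7nm-generation bump pitch

density_gain = (old_pitch_um / new_pitch_um) ** 2
print(f"{density_gain:.2f}x")  # ~1.33x more bumps in the same die area
```

That ~33% density gain is what lets a small 74mm2 chiplet present enough power and signal connections without padding it out with empty silicon.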

One of the ways in order to enable the tighter bump pitch is to adjust how the bumps are processed on the underside of the die. Normally a solder bump on a package is a blob/ball of lead-free solder, relying on the physics of surface tension and reflow to ensure it is consistent and regular. In order to enable the tighter bump pitches however, AMD had to move to a copper pillar solder bump topology.

In order to enable this feature, copper is epitaxially deposited within a mask in order to create a ‘stand’ on which the reflow solder sits. Due to the diameter of the pillar, less solder mask is needed and it creates a smaller solder radius. AMD also came across another issue, due to its dual die design inside Matisse: if the IO die uses standard solder bump masks, and the chiplets use copper pillars, there needs to be a level of height consistency for integrated heat spreaders. For the smaller copper pillars, this means managing the level of copper pillar growth.

AMD explained that it was actually easier to manage this connection implementation than it would be to build different height heatspreaders, as the stamping process used for heatspreaders would not enable such a low tolerance. AMD expects all of its 7nm designs in the future to use the copper pillar implementation.

Routing

Beyond just putting the silicon onto the organic substrate, that substrate has to manage connections between the dies and externally to the dies. AMD had to increase the number of substrate layers in the package to 12 for Matisse in order to handle the extra routing (no word on how many layers are required in Rome, perhaps 14). This also becomes somewhat complicated for single-chiplet and dual-chiplet processors, especially when testing the silicon before placing it onto the package.

From the diagram we can clearly see the IF links from the two chiplets going to the IO die, with the IO die also handling the memory controllers and what looks like power plane duties as well. There are no in-package links between the chiplets, in case anyone was still wondering: the chiplets have no way of direct communication – all communication between chiplets is handled through the IO die.
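The resulting topology is a simple star with the IO die as the hub. A toy model makes the consequence explicit: any chiplet-to-chiplet transfer always costs two IF hops. All names here are illustrative, not AMD's internal routing logic:

```python
# Toy model of Matisse's star topology: chiplets link only to the IO die,
# never to each other. Endpoint names are hypothetical labels.

def path(src: str, dst: str) -> list[str]:
    """Return the hop sequence between two package endpoints.

    Chiplets have no direct link, so any chiplet-to-chiplet
    transfer takes two IF hops, both through the IO die.
    """
    if src == dst:
        return [src]
    if src == "io_die" or dst == "io_die":
        return [src, dst]          # one hop to or from the hub
    return [src, "io_die", dst]    # always routed via the hub

print(path("chiplet0", "chiplet1"))  # ['chiplet0', 'io_die', 'chiplet1']
print(path("chiplet0", "io_die"))    # ['chiplet0', 'io_die']
```

The design choice trades a uniform (if longer) chiplet-to-chiplet path for much simpler chiplets and substrate routing.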

AMD stated that with this layout it also had to be mindful of how the processor was placed in the system, as well as cooling and memory layout. When it comes to faster memory support, or the tighter tolerances of PCIe 4.0, all of this also needs to be taken into consideration in order to provide the optimal path for signaling without interference from other traces and other routing.

Comments

  • wurizen - Friday, June 14, 2019 - link

    flex^^^
  • wurizen - Friday, June 14, 2019 - link

    OMFG! I. Am. Not. Talking. About. Intel. Mesh.

    I. Am. Talking. About. Infinity. Fabric. High. Memory. Latency!

    Now that I got that off my chest, let's proceed shall we...

    OMFG!

    L3 Cache? WTF!

    Do you think you're so clever to talk about L3 cache to show off your knowledge as if to convince ppl here you know something? Nah, man!

    WTF are you talking about L3 cache, dude? Come on, dude, get with the program.

    The program is "Cross-CCX-High-Memory-Latency" with Infinity Fabric 1.0

    And, games (BO3, BF1, BF4 from my testing) are what is affected by this high latency penalty in real-time. Imagine playing a game of BO3 while throughout the game, the game is "micro-pausing" "Micro-slow-motioning" repeatedly throughout the match? Yep, you got it, it makes it unplayable.

    In productive work like video editing, I would not see the high latency as an issue unless it affects "timeline editing" causing it to lag, as well.

    I have heard some complain issues with it in audio editing with audio work. But I don't do that so I can't say.

    As for "compute-intensive applications (y'know, real work)" --delatFx2

    ....

    .....

    ......

    You duh man, bruh! a real compute-intensive, man!

    "This article mentions a Windows 10 patch to ensure that threads get assigned to the same CCX before going to the adjacent one." --deltaFx2

    Uhhh... that won't fix it. Only AMD can fix it in Infinity Fabric 2.0 (Ryzen 2), if, indeed, AMD has fixed it. By making it faster! And/or, reducing that ~110ns latency to around 69ns.

    Now, my question is, and you (deltaFx2) haven't mentioned it in your wise-response to my comments, is that SLIDE of "Raw Memory Performance" showing 69ns latency at 3200 MHz RAM. Is that raw memory performance Intra-CCX-Memory-Performance or Inter-core-Memory-Performance? Bada-boom, bish!
  • wurizen - Friday, June 14, 2019 - link

    it's a problem ppl are having, if you search enough....
  • Alistair - Wednesday, June 12, 2019 - link

    those kinds of micro stutters are usually motherboard or most likely your windows installation causing it, reinstall windows, then try a different motherboard maybe
  • wurizen - Wednesday, June 12, 2019 - link

    Wow, really? Re-install windows?

    I just wanna know (cough, cough Anand) what the Cross-CCX-Latency is for Ryzen 2 and Infinity Fabric 2.0.

    If, it is still ~110ns like before.... well, guess what? 110 nano-effin-seconds is not fast enough. It's too HIGH a latency!

    You can't update bios/motherboard or re-install windows, or get 6000 MHz RAM (the price for that, tho?) to fix it. (As shown in the graph for whatever "Raw Memory Latency" is for that 3200 MHz RAM to 3600 MHz stays at 69 ns and only at 3733 MHz RAM does it drop to 67ns?).... This is the same result PCPER did with Ryzen IF 1.0 showing that getting faster RAM at 3200 MHz did not improve the Cross-CCX-Memory-Latency....
  • supdawgwtfd - Thursday, June 13, 2019 - link

    I don't get any stutters with my 1600.

    As above. It's nothing to do with the CPU directly.

    Something else is causing the problem.
  • deltaFx2 - Thursday, June 13, 2019 - link

    How do you know for sure that the microstutter or whatever it is you think you are facing is due to the inter-ccx latency? Did you actually pin threads to CCXs to confirm this theory? Do you know when inter-ccx latency even comes into play? Inter-ccx latency ONLY matters for shared memory being modified by different threads; this should be a tiny fraction of your execution time, otherwise you are not much better going multithreaded. Moreover, each CCX runs 8 threads, so are you saying your game uses more than 8? That would be an interesting game indeed, given that intel's mainstream gaming CPUs don't have a problem on 4c8t.

    To me, you've just jumped the gun and gone from "I have got some microstutter issues" to "I know PCPer ran some microbenchmark to find out the latency" to "that must be the problem". It does not follow.
  • FreckledTrout - Thursday, June 13, 2019 - link

    I agree. If micro stutter from CCX latency was really occurring this would be a huge issue. These issues really have to be something unrelated.
  • wurizen - Friday, June 14, 2019 - link

    Another thing that was weird was GPU usage drop from 98% to like 0% in-game, midst-action, while I was playing... constantly, in a repeated pattern throughout the game... this is not a server or games hitching. we understand as gamers that a game will "hitch" once in a while. this is like "slow-motion" "micro-pause" thing happening through out the game. happens in single player (BF1) so I ruled out server-side. It's like the game goes in "slow-motion" for a second... not once or twice in a match, per se. But, throughout and in a repeated constant fashion... along with seeing GPU usage to accompany the effect dropping from 98% or so (normal) to 0% for split seconds (again, not once or twice in a match; but a constant, repeated pattern throughout the match)

    And, there are people having head-scratching issues similar to me with Ryzen CPU.

    No one (cough, cough Anand; nor youtube tech tubers will address it) seems to address it tho.

    But, I think that Ryzen 2 is coming out and if Cross-CCX-High-LAtency-Issue is the same, then we're bound to hear more. I'm sure.

    I am thinking tech sites are giving AMD a chance... but not sure... doesn't matter tho. I got a 7700k (I wanted the 8-core thing when 1700x Ryzen came out) but its fine. Im not a fanboy. Just a techboy.... if anything...
  • wurizen - Friday, June 14, 2019 - link

    The "micro-stutter" or "micro-pausing" is not once or twice (I get those with Intel, as well) but, a repeated, constant pattern throughout the match and round of game. The "micro-stutter" and "micro-pause" also "FEELS" different than what I felt with my prior 3700K CPU and current 7700K CPU. It's like a "micro-slow-motion." I am not making this up. I am not crazy!
