CCX Size

Moving down in node size brings up a number of challenges in the core and beyond. Even disregarding power and frequency, the ability to put structures into silicon and then integrate that silicon into the package, as well as providing power to the right parts of the silicon through the right connections becomes an exercise in itself. AMD gave us some insight into how 7nm changed some of its designs, as well as the packaging challenges therein.

A key metric given by AMD relates to the core complex: four cores, the associated core structures, and then the L2 and L3 caches. With 12nm and the Zen+ core, AMD stated that a single core complex was ~60 square millimeters, which breaks down into 44mm2 for the cores and 16mm2 for the 8MB of L3 per CCX. Add two of these 60mm2 complexes plus a memory controller, PCIe lanes, four IF links, and other IO, and a Zen+ zeppelin die comes to 213mm2 in total.

For Zen 2, a single chiplet is 74mm2, of which 31.3mm2 is a core complex with 16 MB of L3. AMD did not break down this 31.3mm2 figure into cores and L3, but one might imagine that the L3 is approaching 50% of that number. The chiplet is so much smaller because it needs no memory controllers, has only one IF link, and has no IO: all of the platform requirements sit on the IO die. This allows AMD to make the chiplets extremely compact. However, if AMD intends to keep increasing the L3 cache, we might end up with most of the chip as L3.
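We can put a rough number on that "approaching 50%" guess. The sketch below uses the figures quoted above; the 2x SRAM density gain from 12nm to 7nm is an assumption for illustration, not an AMD figure.

```python
# Back-of-the-envelope estimate of the L3's share of the Zen 2 CCX.
# Article figures: Zen+ has 8 MB of L3 in 16mm2 on 12nm; the Zen 2 CCX
# (4 cores + 16 MB L3) is 31.3mm2 on 7nm. The density scaling is assumed.

zen_plus_l3_mm2 = 16.0      # 8 MB of L3 per CCX on 12nm
zen2_ccx_mm2 = 31.3         # CCX (cores + 16 MB L3) on 7nm
sram_density_scaling = 2.0  # assumed 12nm -> 7nm SRAM density gain

# 16 MB is twice the capacity, at (assumed) twice the density:
zen2_l3_mm2 = (16.0 / 8.0) * zen_plus_l3_mm2 / sram_density_scaling
l3_fraction = zen2_l3_mm2 / zen2_ccx_mm2

print(f"Estimated Zen 2 L3 area: {zen2_l3_mm2:.1f} mm2")
print(f"L3 share of the CCX: {l3_fraction:.0%}")  # ~51%
```

Under those assumptions the L3 lands at roughly half the CCX, which is consistent with the eyeballed estimate.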

Overall, however, AMD has stated that the CCX (cores plus L3) has decreased in size by 47%. That is great scaling, especially once the +15% raw instruction throughput and increased frequencies come into play. Performance per mm2 is going to be a very exciting metric.
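The quoted numbers line up with that claim, and give a feel for the performance-per-area gain. This sketch uses only the +15% throughput figure from above and deliberately ignores frequency:

```python
# Sanity-check AMD's quoted ~47% CCX area reduction, and sketch the
# resulting performance-per-mm2 gain (IPC only; frequency ignored).
zen_plus_ccx_mm2 = 60.0   # Zen+ CCX on 12nm
zen2_ccx_mm2 = 31.3       # Zen 2 CCX on 7nm

shrink = 1 - zen2_ccx_mm2 / zen_plus_ccx_mm2
print(f"CCX area reduction: {shrink:.0%}")  # ~48%, matching AMD's ~47% claim

perf_per_mm2_gain = 1.15 / (zen2_ccx_mm2 / zen_plus_ccx_mm2)
print(f"Perf per mm2 gain (IPC only): {perf_per_mm2_gain:.1f}x")  # ~2.2x
```

Even before any clock-speed uplift, throughput per unit of silicon roughly doubles on these figures.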

Packaging

With Matisse staying in the AM4 socket, and Rome in the EPYC socket, AMD stated that it had to make some bets on its packaging technology in order to maintain compatibility. Invariably some of these bets end up as tradeoffs for continued support; however, AMD believes the extra effort has been worth the continued compatibility.

One of the key points AMD spoke about in relation to packaging is how each of the silicon dies is attached to the package. In order to enable a pin-grid array desktop processor, the silicon has to be affixed to the package in a BGA fashion. AMD stated that due to the 7nm process, the bump pitch (the distance between the solder balls on the silicon die and the package) reduced from 150 microns on 12nm to 130 microns on 7nm. This doesn't sound like much, but AMD stated that there are only two vendors in the world with technology sufficient to do this. The only alternative would be a bigger piece of silicon to support a larger bump pitch, ultimately leading to a lot of empty silicon (or a different design paradigm).
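The pitch reduction matters more than it sounds because density scales with the square of the pitch. A quick check, assuming a regular square grid of bumps (the grid layout is my assumption, not something AMD specified):

```python
# Bump density gain from shrinking the bump pitch, assuming bumps sit
# on a square grid, so bumps per unit area scale as 1 / pitch^2.
pitch_12nm_um = 150.0  # bump pitch on 12nm, in microns
pitch_7nm_um = 130.0   # bump pitch on 7nm, in microns

bumps_per_area_gain = (pitch_12nm_um / pitch_7nm_um) ** 2
print(f"Bump density gain: {bumps_per_area_gain:.2f}x")  # ~1.33x
```

So the tighter pitch buys roughly a third more power and signal connections per square millimeter of die, which is exactly what a shrunken chiplet needs.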

One of the ways to enable the tighter bump pitch is to adjust how the bumps are processed on the underside of the die. Normally a solder bump on a package is a blob/ball of lead-free solder, relying on the physics of surface tension and reflow to ensure it is consistent and regular. In order to enable the tighter bump pitches, however, AMD had to move to a copper pillar solder bump topology.

In order to enable this feature, copper is electroplated within a mask to create a 'stand' on which the reflow solder sits. Due to the diameter of the pillar, less solder mask is needed and the solder forms a smaller radius. AMD also came across another issue due to its dual-die design inside Matisse: if the IO die uses standard solder bump masks, and the chiplets use copper pillars, there needs to be a level of height consistency for the integrated heat spreader. For the smaller copper pillars, this means managing the level of copper pillar growth.

AMD explained that it was actually easier to manage this connection implementation than to build heat spreaders of different heights, as the stamping process used for heat spreaders would not enable such a low tolerance. AMD expects all of its 7nm designs in the future to use the copper pillar implementation.

Routing

Beyond just putting the silicon onto the organic substrate, that substrate has to manage connections between the dies as well as off the package. AMD had to increase the number of substrate layers in the package to 12 for Matisse in order to handle the extra routing (no word on how many layers are required in Rome; perhaps 14). This also becomes somewhat complicated for single-chiplet and dual-chiplet processors, especially when testing the silicon before placing it onto the package.

From the diagram we can clearly see the IF links from the two chiplets going to the IO die, with the IO die also handling the memory controllers and what looks like power plane duties as well. There are no in-package links between the chiplets, in case anyone was still wondering: the chiplets have no way of direct communication – all communication between chiplets is handled through the IO die.
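That star topology can be captured in a few lines. The sketch below is purely illustrative: the die names and the hop-counting helper are my own, but the link map reflects the layout described above, where each chiplet has a single IF link to the IO die and no peer link.

```python
from collections import deque

# Hypothetical link map for a two-chiplet Matisse package: every path
# between chiplets (and to DRAM or PCIe) goes through the IO die.
links = {
    "chiplet0": ["io_die"],
    "chiplet1": ["io_die"],
    "io_die":   ["chiplet0", "chiplet1", "dram", "pcie"],
}

def hops(src, dst, graph=links):
    """Breadth-first search for the hop count between two dies."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # unreachable

print(hops("chiplet0", "chiplet1"))  # 2: chiplet0 -> io_die -> chiplet1
```

Chiplet-to-chiplet traffic is always two hops, which is why the latency and bandwidth characteristics of the IO die matter so much in this design.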

AMD stated that with this layout it also had to be mindful of how the processor was placed in the system, as well as cooling and memory layout. When it comes to faster memory support, or the tighter tolerances of PCIe 4.0, all of this also needs to be taken into consideration in order to provide the optimal path for signaling without interference from other traces and other routing.

Comments

  • GreenReaper - Wednesday, June 12, 2019 - link

    The last you heard? It says clearly on page 6 that there is "single-op" AVX 256, and on page 9 explicitly that the width has been increased to 256 bits:
    https://www.anandtech.com/show/14525/amd-zen-2-mic...
    https://www.anandtech.com/show/14525/amd-zen-2-mic...

    To be honest, I don't mind how it's implemented as long as the real-world performance is there at a reasonable price and power budget. It'll be interesting to see the difference in benchmarks.
  • arashi - Wednesday, June 12, 2019 - link

    Don't expect too much cognitive abilities regarding AMD from HStewart, his pay from big blue depends on his misinformation disguised as misunderstanding.
  • Qasar - Thursday, June 13, 2019 - link

    HA ! so that explains it..... the more misinformation and misunderstanding he spreads.. the more he gets paid.......
  • HStewart - Thursday, June 13, 2019 - link

    I don't get paid for any of this - I just not extremely heavily AMD bias like a lot of people here. It just really interesting to me when Intel release information about new Ice Lake processor with 2 load / s store processor that with in a a couple days here bla bla about Zen+++. Just because 7nm does not mean they change much.

    Maybe AMD did change it 256 width - and not dual 128, they should be AVX 2 has been that way for a long time and Ice Lake is now 512. Maybe by time of Zen 4 or Zen+++++ it will be AVX 512 support.
  • Korguz - Thursday, June 13, 2019 - link

    no.. but it is known.. you are heavily intel bias..

    whats zen +++++++++ ????
    x 86-512 ??????
    but you are usually the one spreading misinformation about amd...
    " and support for single-operation AVX-256 (or AVX2). AMD has stated that there is no frequency penalty for AVX2 " " AMD has increased the execution unit width from 128-bit to 256-bit, allowing for single-cycle AVX2 calculations, rather than cracking the calculation into two instructions and two cycles. This is enhanced by giving 256-bit loads and stores, so the FMA units can be continuously fed. "
  • HStewart - Thursday, June 13, 2019 - link

    Zen+++++ was my joke as every AMD fan jokes about Intel 10+++ Just get over it

    x-86 512 - is likely not going to happen, it just to make sure people are not confusing vector processing bits with cpu bits 64 bit is what most os uses now. for last decade or so

    Intel has been using 256 AVX 2 since day one, the earlier version of AMD chips on only had two combine 128 bit - did they fix this with Zen 2 - this is of course different that AVX 512. which standard in in all Ice Lake and higher cpus and older Xeon's.
  • Qasar - Thursday, June 13, 2019 - link

    sorry HStewart... but even sone intel fans are making fun of the 14++++++ and it would be funny.. if you were making fun of the process node.. not the architeCture...
    "
    x-86 512 - is likely not going to happen, it just to make sure people are not confusing vector processing bits with cpu bits 64 bit is what most os uses now. for last decade or so " that makes NO sense...
  • HStewart - Thursday, June 13, 2019 - link

    One more thing I stay away from AMD unless there are one that bias against Intel like spreading misinformation that AVX 512 is misleading. and it really not 512 surely they do not have proof of that.

    AVX 512 is not the same as x86-512, I seriously doubt we will ever need that that but then at time people didn't think we need x86-64 - I remember original day of 8088,. no body thought we needed more 64meg AVX-512 is for vectors which is totally different.
  • just4U - Thursday, June 13, 2019 - link

    I always have a higher end Intel setup and normally a AMD setup as well.. plus I build a fair amount of setups on both. No bias here except maybe.. wanting AMD to be competitive. The news that dropped over the past month was the biggest for AMD in over a decade HS.. If you can't even acknowledge that (even grudgingly..) then geez.. I dunno.

    This has been awesome news for the industry and will put intel on their toes to do better. Be happy about it.
  • Xyler94 - Monday, June 17, 2019 - link

    HStewart, please. You don't stay away from AMD at all. You take ANY opportunity to try and make Intel look better than AMD.

    There was an article, it was Windows on ARM. You somehow managed to make a post about Intel winning over AMD. Don't spew that BS. People don't hate Intel as much as you make them out to be, they don't like you glorifying Intel.
