Decode

For the decode stage, the main uptick here is the micro-op cache. By doubling in size from 2K entries to 4K entries, it will hold more decoded operations than before, which means it should see significantly more reuse. To take advantage of that reuse, AMD has increased the dispatch rate from the micro-op cache into the buffers to up to eight fused instructions per cycle. Assuming that AMD can bypass its decoders often, this should be a very efficient block of silicon.
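
To put rough numbers on why the larger cache matters, here is a minimal sketch in C of a front-end delivery model. The 2K/4K entry counts, the eight-wide op-cache dispatch, and the four-wide decode path come from the description above; the all-or-nothing "loop fits in the cache" behaviour and the example loop footprints are purely hypothetical simplifications, not AMD's actual design.

    /* Toy front-end delivery model: illustrative only, not AMD's actual design.
     * Entry counts (2K vs 4K) and the 8-wide op-cache dispatch are from the
     * article; the all-or-nothing "loop fits in the cache" assumption and the
     * example footprints are hypothetical. */
    #include <stdio.h>

    /* Micro-ops a hot loop can sustain per cycle, given where it is served from. */
    static double uops_per_cycle(int loop_uops, int op_cache_entries)
    {
        const double OP_CACHE_WIDTH = 8.0; /* fused ops/cycle from the op cache */
        const double DECODE_WIDTH   = 4.0; /* ops/cycle from the four decoders  */

        /* Simplifying assumption: a loop either fits entirely in the op cache
         * (served at op-cache width) or falls back to the decoders. */
        return (loop_uops <= op_cache_entries) ? OP_CACHE_WIDTH : DECODE_WIDTH;
    }

    int main(void)
    {
        int footprints[] = { 1024, 3000, 6000 };   /* hypothetical loop sizes in micro-ops */
        for (int i = 0; i < 3; i++) {
            int n = footprints[i];
            printf("loop of %4d uops: 2K entries -> %.0f/cycle, 4K entries -> %.0f/cycle\n",
                   n, uops_per_cycle(n, 2048), uops_per_cycle(n, 4096));
        }
        return 0;
    }

In this toy model, a loop of 3000 micro-ops spills out of a 2K-entry cache but fits in a 4K-entry one, which is the kind of case where the larger structure pays off.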

What makes the 4K-entry cache more impressive is the comparison with the competition. In Intel’s Skylake family, the micro-op cache is only 1.5K entries. Intel increased the size by 50% to 2.25K entries for Ice Lake, but that core is coming to mobile platforms later this year and perhaps to servers next year; by comparison, AMD’s Zen 2 core will cover the gamut from consumer to enterprise. We can also compare it to the micro-op cache in Arm’s A77 CPU, which is 1.5K entries, although that is Arm’s first micro-op cache design for a core.

The decoders in Zen 2 stay the same: we still have access to four complex decoders (compared to Intel’s one complex + four simple decoders), and decoded instructions are cached into the micro-op cache as well as dispatched into the micro-op queue.

AMD has also stated that it has improved its micro-op fusion algorithm, although it did not go into detail as to how this affects performance. Current micro-op fusion conversion is already pretty good, so it will be interesting to see what AMD has done here. Compared to Zen and Zen+, the full-width AVX2 support means the decoder no longer needs to crack a 256-bit AVX2 instruction into two micro-ops: it now travels through the pipeline as a single micro-op.
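
As a concrete illustration of what that cracking means, below is a small 256-bit AVX2 loop in C (a hypothetical workload, compiled with something like -O2 -mavx2); the micro-op counts in the comments simply restate the Zen/Zen+ versus Zen 2 behaviour described above and are not measured figures.

    /* Illustrative only: each 256-bit AVX2 instruction below was, per the
     * article, cracked into two 128-bit micro-ops on Zen/Zen+ but issues as a
     * single micro-op on Zen 2. The arrays and loop are a hypothetical workload. */
    #include <immintrin.h>
    #include <stdint.h>
    #include <stdio.h>

    void add_arrays(int32_t *dst, const int32_t *a, const int32_t *b, size_t n)
    {
        /* Each vpaddd operates on eight 32-bit lanes at once.
         * Zen/Zen+: ~2 micro-ops per 256-bit instruction (two 128-bit halves).
         * Zen 2:    ~1 micro-op, thanks to the native 256-bit datapath.        */
        for (size_t i = 0; i + 8 <= n; i += 8) {
            __m256i va = _mm256_loadu_si256((const __m256i *)(a + i));
            __m256i vb = _mm256_loadu_si256((const __m256i *)(b + i));
            _mm256_storeu_si256((__m256i *)(dst + i), _mm256_add_epi32(va, vb));
        }
    }

    int main(void)
    {
        int32_t a[8] = {1,2,3,4,5,6,7,8}, b[8] = {8,7,6,5,4,3,2,1}, d[8];
        add_arrays(d, a, b, 8);
        printf("%d %d\n", d[0], d[7]);   /* prints: 9 9 */
        return 0;
    }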

Going beyond the decoders, the micro-op queue and dispatch can feed six micro-ops per cycle into the schedulers. This is slightly imbalanced, however, as AMD has independent integer and floating point schedulers: the integer scheduler can accept six micro-ops per cycle, whereas the floating point scheduler can only accept four. The dispatch can send micro-ops to both schedulers in the same cycle.
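
A toy model of that imbalance is sketched below in C. The six-wide total dispatch, six-wide integer acceptance, and four-wide floating point acceptance come from the description above; the greedy FP-first policy and the example workloads are assumptions made purely for illustration.

    /* Toy dispatch model: illustrative sketch, not AMD's actual scheduling logic.
     * Widths (6 total, 6 INT, 4 FP per cycle) are from the article; the greedy
     * in-order policy is assumed. */
    #include <stdio.h>

    /* Cycles needed to dispatch int_uops + fp_uops under the per-cycle limits. */
    static int cycles_to_dispatch(int int_uops, int fp_uops)
    {
        int cycles = 0;
        while (int_uops > 0 || fp_uops > 0) {
            int budget = 6;                                   /* total dispatch width */
            int fp = fp_uops < 4 ? fp_uops : 4;               /* FP scheduler: max 4  */
            budget -= fp;
            int in = int_uops < budget ? int_uops : budget;   /* INT fills what's left */
            fp_uops  -= fp;
            int_uops -= in;
            cycles++;
        }
        return cycles;
    }

    int main(void)
    {
        printf("24 INT +  0 FP -> %d cycles\n", cycles_to_dispatch(24, 0));  /* 4 */
        printf(" 0 INT + 24 FP -> %d cycles\n", cycles_to_dispatch(0, 24));  /* 6: FP capped at 4/cycle */
        printf("12 INT + 12 FP -> %d cycles\n", cycles_to_dispatch(12, 12)); /* 4: a mix can use the full width */
        return 0;
    }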

Comments

  • Ratman6161 - Friday, June 14, 2019 - link

    Better yet, why even bother talking about it? I read these architecture articles and find them interesting, but I'll spend my money based on real world performance.
  • Notmyusualid - Sunday, July 7, 2019 - link

    @ Ratman - aye, I give this all passing attention too. Hoping one day another 'Conroe' moment lands at our feet.
  • RedGreenBlue - Tuesday, June 11, 2019 - link

    The immediate value at these price points is the multithreading. Even ignoring the CPU cost, the motherboard costs of Zen 2 on AM4 can be substantially cheaper than the threadripper platform. Also, keep in mind what AMD did soon after the Zen 1000 series launch, and, I think, Zen 2 launch to a degree. They knocked down the prices pretty substantially. The initial pricing is for early adopters with less price sensitivity and who have been holding off upgrading as long as possible and are ready to spring for something. 3 months or so from launch these prices may be reduced officially, if not unofficially by 3rd parties.
  • RedGreenBlue - Tuesday, June 11, 2019 - link

    *Meant to say Z+ launch, not Zen 2.
  • Spoelie - Wednesday, June 12, 2019 - link

    To be fair, those price drops were also partially instigated by CPU launches from Intel - companies typically don't lower prices automatically, usually it is from competitive pressure or low sales.
  • just4U - Thursday, June 13, 2019 - link

    I don't believe that's true at all, S. Pricing was already lower than the 8th gen Intels, and the 9th gen, while adding cores, wasn't competing against the Ryzens any more than the older series.
  • sing_electric - Friday, June 14, 2019 - link

    That's true, but by most indications, if you want the "full" AM4 experience, you'll be paying more than you did previously because the 500-series motherboards will cost significantly more - I'm sure that TR boards will see an increase, too, but I think, proportionately, it might be smaller (because the cost increase for say, PCIe 4.0 is probably a fixed dollar amount, give or take).
  • mode_13h - Tuesday, June 11, 2019 - link

    Huh? There've been lots of Intel generations that did not generate those kinds of performance gains, and Intel has not introduced a newer product at a lower price point, since at least the Core i-series. So, I have no idea where you get this 10-15% perf per dollar figure.
  • Irata - Tuesday, June 11, 2019 - link

    So who does innovate, in your humble opinion?
    Looking at your posts, you seem to confuse / jumble quite a lot of things.
    Example TSMC: so yes, they are giving AMD a better manufacturing process that allows them to offer more transistors per area, or lower power use at the same clock speed.
    But better perf/$? Not sure - that all depends on the price per good die, i.e. yields, price etc. all play a role, and I assume you do not know any of this data.

    Moore's law - Alx already covered that...

    As for the 16 core - what would the ideal price be for you? $199? What do the alternatives cost (CPU + HSF, and total platform cost)?

    If you want to look at price - yes, it did go up compared to the 2xxx series, but compared to the first Ryzen (2017), you do get quite a lot more than you did with the original Ryzen.

    1800X 8C/16T, 3.6 GHz base / 4.0 GHz boost, for $499
    3900X 12C/24T, 3.8 GHz base / 4.6 GHz boost, for $499

    Now the 2700X was only $329, but its counterpart the 3700X has the same price and roughly the same frequency, with lower power consumption and supposedly better performance in just the range you mention.
  • Spunjji - Tuesday, June 11, 2019 - link

    Nice comprehensive summary there!
