New Instructions

Cache and Memory Bandwidth QoS Control

As with most new x86 microarchitectures, there is a drive to increase performance through new instructions, but also to maintain parity between vendors in which instructions are supported. For Zen 2, while AMD is not chasing some of the more exotic instruction sets that Intel offers, it is adding new instructions in three areas.

The first, CLWB (Cache Line Write Back), has been seen before on Intel processors in relation to non-volatile memory. The instruction writes a modified cache line back to memory without evicting it from the cache, so that data reaches the persistent media and is not lost if the system suddenly loses power. There are other instructions associated with securing data to non-volatile memory systems, although this wasn’t explicitly commented on by AMD. It could be an indication that AMD is looking to better support non-volatile memory hardware and structures in future designs, particularly in its EPYC processors.

The second cache instruction, WBNOINVD (Write Back with No Invalidate), is an AMD-only command, but builds on similar commands such as WBINVD. Where WBINVD writes all modified cache lines back to memory and then invalidates the caches, WBNOINVD performs the write-back without the invalidate, leaving the cached data resident and warm. The intent is to run the write-back ahead of time: rather than a latency-critical operation stalling on a flush when it arrives, the flush is processed in advance while that instruction is still coming down the pipe, accelerating its ultimate execution.

The final set of instructions, filed under QoS, actually relates to how cache and memory priorities are assigned.

When a cloud CPU is split into different containers or VMs for different customers, the level of performance is not always consistent as performance could be limited based on what another VM is doing on the system. This is known as the ‘noisy neighbor’ issue: if someone else is eating all the core-to-memory bandwidth, or L3 cache, it can be very difficult for another VM on the system to have access to what it needs. As a result of that noisy neighbor, the other VM will have a highly variable latency on how it can process its workload. Alternatively, if a mission critical VM is on a system and another VM keeps asking for resources, the mission critical one might end up missing its targets as it doesn’t have all the resources it needs access to.

Dealing with noisy neighbors, beyond ensuring full access to the hardware as a single user, is difficult. Most cloud providers and operations won’t even tell you if you have any neighbors, and in the event of live VM migration, those neighbors might change very frequently, so there is no guarantee of sustained performance at any time. This is where a set of dedicated QoS (Quality of Service) instructions come in.

As with Intel’s implementation, when a series of VMs is allocated onto a system on top of a hypervisor, the hypervisor can control how much memory bandwidth and cache each VM has access to. If a mission-critical 8-core VM requires access to 64 MB of L3 and at least 30 GB/s of memory bandwidth, the hypervisor can ensure that the priority VM always has access to that amount, and either remove it entirely from the pool available to other VMs, or intelligently restrict what the others may claim as the mission-critical VM bursts into full access.

Intel only enables this feature on its Xeon Scalable processors, whereas AMD will enable it up and down its Zen 2 processor family range, for both consumers and enterprise users.

The immediate issue I had with this feature is on the consumer side. Imagine if a video game demanded access to all the cache and all the memory bandwidth, leaving some streaming software with access to none – it could cause havoc on the system. AMD explained that while individual programs can technically request a certain level of QoS, it will be up to the OS or the hypervisor to decide whether those requests are both valid and suitable. AMD sees this more as an enterprise feature used when hypervisors are in play, rather than for bare metal installations on consumer systems.

216 Comments

  • Ratman6161 - Friday, June 14, 2019 - link

    Better yet, why even bother talking about it? I read these architecture articles and find them interesting, but I'll spend my money based on real world performance.
  • Notmyusualid - Sunday, July 7, 2019 - link

    @ Ratman - aye, I give this all passing attention too. Hoping one day another 'Conroe' moment lands at our feet.
  • RedGreenBlue - Tuesday, June 11, 2019 - link

    The immediate value at these price points is the multithreading. Even ignoring the CPU cost, the motherboard costs of Zen 2 on AM4 can be substantially cheaper than the threadripper platform. Also, keep in mind what AMD did soon after the Zen 1000 series launch, and, I think, Zen 2 launch to a degree. They knocked down the prices pretty substantially. The initial pricing is for early adopters with less price sensitivity and who have been holding off upgrading as long as possible and are ready to spring for something. 3 months or so from launch these prices may be reduced officially, if not unofficially by 3rd parties.
  • RedGreenBlue - Tuesday, June 11, 2019 - link

    *Meant to say Z+ launch, not Zen 2.
  • Spoelie - Wednesday, June 12, 2019 - link

    To be fair, those price drops were also partially instigated by CPU launches from Intel - companies typically don't lower prices automatically, usually it is from competitive pressure or low sales.
  • just4U - Thursday, June 13, 2019 - link

    I don't believe that's true at all S. Pricing was already lower than the 8th gen Intels and the 9th while adding cores wasn't competing against the Ryzens any more than the older series..
  • sing_electric - Friday, June 14, 2019 - link

    That's true, but by most indications, if you want the "full" AM4 experience, you'll be paying more than you did previously because the 500-series motherboards will cost significantly more - I'm sure that TR boards will see an increase, too, but I think, proportionately, it might be smaller (because the cost increase for say, PCIe 4.0 is probably a fixed dollar amount, give or take).
  • mode_13h - Tuesday, June 11, 2019 - link

    Huh? There've been lots of Intel generations that did not generate those kinds of performance gains, and Intel has not introduced a newer product at a lower price point, since at least the Core i-series. So, I have no idea where you get this 10-15% perf per dollar figure.
  • Irata - Tuesday, June 11, 2019 - link

    So who does innovate in your humble opinion ?
    Looking at your posts, you seem to confuse / jumble quite a lot of things.
    Example TSMC: So yes, they are giving AMD a better manufacturing process that allows them to offer more transistors per area or lower power use at the same clock speed.
    But better perf/ $ ? Not sure - that all depends on the price per good die, i.e. yields, price etc. all play a role and I assume you do not know any of this data.

    Moores law - Alx already covered that...

    As for the 16 core - what would the ideal price be for you ? $199 ? What do the alternatives cost (CPU + HSF and total platform cost).

    If you want to look a price - yes, it did go up compared to the 2xxx series, but compared to the first Ryzen (2017), you do get quite a lot more than you did with the original Ryzen.

    1800X 8C/16T, 3.6 GHz base / 4.0 GHz boost, for $499
    3900X 12C/24T, 3.8 GHz base / 4.6 GHz boost, for $499

    Now the 2700x was only $329, but its counterpart the 3700x has the same price, roughly the same frequency but a lower power consumption and supposedly better performance in just the range you mention.
  • Spunjji - Tuesday, June 11, 2019 - link

    Nice comprehensive summary there!
