New Instructions: Cache and Memory Bandwidth QoS Control

As with most new x86 microarchitectures, there is a drive to increase performance through new instructions, but also a push for parity between vendors in which instructions are supported. For Zen 2, while AMD is not catering to some of the more exotic instruction sets that Intel supports, it is adding new instructions in three different areas.

The first one, CLWB, has been seen before on Intel processors in relation to non-volatile memory. This instruction allows a program to write data back to non-volatile memory without evicting it from the cache, so that if the system receives a halt command or loses power, the data is not lost. There are other instructions associated with securing data to non-volatile memory systems, although this wasn't explicitly commented on by AMD. It could be an indication that AMD is looking to better support non-volatile memory hardware and structures in future designs, particularly in its EPYC processors.
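As a concrete illustration of the intended usage, the sketch below copies a record into memory-mapped non-volatile memory and makes it durable with CLWB plus a store fence. This is a minimal sketch, not AMD's example: the persist helper is hypothetical, and it assumes a compiler exposing the _mm_clwb intrinsic (e.g. built with -mclwb).

    #include <immintrin.h>  /* _mm_clwb, _mm_sfence (compile with -mclwb) */
    #include <stdint.h>
    #include <string.h>

    #define CACHE_LINE 64

    /* Hypothetical helper: copy a record into memory-mapped non-volatile
     * memory and make it durable. CLWB writes each dirty line back to
     * memory without evicting it, so the data stays hot for later reads. */
    static void persist(void *pmem_dst, const void *src, size_t len)
    {
        memcpy(pmem_dst, src, len);

        uintptr_t p = (uintptr_t)pmem_dst & ~(uintptr_t)(CACHE_LINE - 1);
        for (; p < (uintptr_t)pmem_dst + len; p += CACHE_LINE)
            _mm_clwb((void *)p);  /* write back each line covering the record */

        _mm_sfence();  /* order the write-backs before any subsequent stores */
    }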

The second cache instruction, WBNOINVD, is an AMD-only command, but builds on other similar commands such as WBINVD. This command is designed for software that can predict when particular parts of the cache will be needed in the future, writing them back ahead of time so they are ready to accelerate future calculations. Without it, if a needed cache line isn't ready, a flush command has to be processed immediately in front of the needed operation, adding latency; running the cache line flush in advance, while the latency-critical instruction is still coming down the pipe, accelerates its ultimate execution.
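Where WBINVD writes every modified cache line back to memory and then invalidates the entire cache, WBNOINVD performs the write-back but leaves the lines valid. It is a privileged instruction, so the sketch below is kernel-context inline assembly purely for illustration (and assumes an assembler recent enough to accept the mnemonic).

    /* Sketch only: WBNOINVD runs at CPL 0, so this belongs in kernel or
     * hypervisor code, not in an application. */
    static inline void wbnoinvd(void)
    {
        /* Write back all modified cache lines, but keep them valid. */
        __asm__ __volatile__("wbnoinvd" ::: "memory");
    }

    static inline void wbinvd(void)
    {
        /* The older variant: write back AND invalidate the whole cache. */
        __asm__ __volatile__("wbinvd" ::: "memory");
    }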

The final set of instructions, filed under QoS, actually relates to how cache and memory priorities are assigned.

When a cloud CPU is split into different containers or VMs for different customers, the level of performance is not always consistent, as it can be limited by what other VMs on the system are doing. This is known as the ‘noisy neighbor’ issue: if another tenant is eating all the core-to-memory bandwidth or L3 cache, it can be very difficult for another VM on the system to get access to what it needs. Because of that noisy neighbor, the other VM will see highly variable latency in how it can process its workload. Alternatively, if a mission-critical VM is on a system and another VM keeps demanding resources, the mission-critical one might end up missing its performance targets because it can't get all the resources it needs.

Dealing with noisy neighbors, beyond ensuring full access to the hardware as a single user, is difficult. Most cloud providers and operations won't even tell you if you have any neighbors, and in the event of live VM migration, those neighbors might change very frequently, so there is no guarantee of sustained performance at any time. This is where a set of dedicated QoS (Quality of Service) instructions comes in.

As with Intel's implementation, when a series of VMs is allocated onto a system on top of a hypervisor, the hypervisor can control how much memory bandwidth and cache each VM has access to. If a mission-critical 8-core VM requires access to 64 MB of L3 and at least 30 GB/s of memory bandwidth, the hypervisor can ensure that the priority VM always has access to that amount, and either remove those resources entirely from the pool available to other VMs, or intelligently restrict other VMs as the mission-critical VM bursts up to its full allocation.
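On Linux, these cache and bandwidth controls (Intel's RDT and AMD's QoS extensions) are exposed through the resctrl filesystem rather than being programmed directly by applications. The sketch below shows the general idea under stated assumptions: the group name, capacity mask, bandwidth value, and PID are all illustrative; valid values depend on the CPU and kernel (see the kernel's resctrl documentation); resctrl must already be mounted at /sys/fs/resctrl; and it needs root.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>

    static void write_file(const char *path, const char *text)
    {
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); exit(1); }
        fputs(text, f);
        fclose(f);
    }

    int main(void)
    {
        /* Create a resource group for the high-priority VM or task
         * (ignores the error if the group already exists). */
        mkdir("/sys/fs/resctrl/critical", 0755);

        /* Reserve half of a 16-way L3 (ways 8-15) on cache domain 0 and
         * cap memory bandwidth; note that MB units differ between vendors. */
        write_file("/sys/fs/resctrl/critical/schemata",
                   "L3:0=ff00\nMB:0=50\n");

        /* Assign a (hypothetical) process to the group. */
        write_file("/sys/fs/resctrl/critical/tasks", "12345\n");
        return 0;
    }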

Intel only enables this feature on its Xeon Scalable processors; AMD, however, will enable it up and down its Zen 2 processor family, for both consumers and enterprise users.

The immediate issue I had with this feature is on the consumer side. Imagine if a video game demanded access to all the cache and all the memory bandwidth while streaming software got access to none; it could cause havoc on the system. AMD explained that while individual programs can technically request a certain level of QoS, it is up to the OS or the hypervisor to decide whether those requests are both valid and suitable. AMD sees this more as an enterprise feature, used when hypervisors are in play, rather than on bare-metal consumer installations.

Comments

  • Teutorix - Tuesday, June 11, 2019 - link

    If TDPs are accurate they should reflect power consumption.

    If a chip needs 95W of cooling, it's using 95W of power. The heat doesn't come out of nowhere.
  • zmatt - Tuesday, June 11, 2019 - link

    I think technically it would be drawing more than its TDP. The heat generated by electronics is waste due to the inefficiency of semiconductors. If you had a perfect conductor with zero resistance in a perfect world, then it shouldn't make any heat. However, the TDP cannot exceed power draw, as that's where the heat comes from. How much TDP differs from power draw would depend on a lot of things, such as what material the semiconductor is made of (silicon, germanium, etc.). And I'm sure design also factors in a great deal.

    If you read Gamers Nexus, they occasionally measure real power draw on systems: https://www.gamersnexus.net/hwreviews/3066-intel-i...
    And you can see that draw massively exceeds TDP in some cases, especially at the high end. This makes sense; if semiconductors were only 10% efficient, they wouldn't perform nearly as well as they do.
  • Teutorix - Tuesday, June 11, 2019 - link

    "I think technically it would be drawing a more than its TDP"

    Yeah, but if a chip is drawing more power than its TDP it is also producing more heat than its TDP. Making the TDP basically a lie.

    "The heat generated by electronics is waste due to the inefficiency of semi conductors. If you had a perfect conductor with zero resistance in a perfect world then it shouldn't make any heat"

    Essentially yes; there is a lower limit on power consumption, but it's many orders of magnitude below where we are today.

    "How much TDP differs from power draw would depend on a lot of things such as what material the semiconductor is made or, silicon, germanium etc. And I'm sure design also factors in a great deal."

    No. TDP = the "intended" thermal output of the device. The thermal output is directly equal to the power input; there's nothing that will ever change that. If your chip is drawing 200W, it's outputting 200W of heat, end of story.

    Intel defines TDP at base clocks, but nobody expects a CPU to sit at base clocks even in extended workloads. So when you have a 9900K, for example, its TDP is 95W, but only when it's at 3.6GHz. If you get up to its all-core boost of 4.7GHz, it's suddenly drawing 200W sustained, assuming you have enough cooling (a rough worked version of this scaling appears after the thread below).

    Speaking of cooling: if you buy a 9900K with a 95W TDP, you'd be forgiven for thinking that a Hyper 212 with a max capacity of 180W would be more than capable of handling this chip. NOPE. Say goodbye to that 4.7GHz all-core boost.

    "If you read Gamers Nexus, they occasionally measure real power draw on systems, https://www.gamersnexus.net/hwreviews/3066-intel-i...
    And you can see that draw massively exceeds TDP in some cases, especially at the high end. This makes sense, if semiconductors were only 10% efficient then they wouldn't perform nearly as well as they do."

    None of that makes any difference. TDP is supposed to represent the cooling capacity needed for the chip. If a "95W" chip can't be sufficiently cooled by a 150W cooler there's a problem.

    Both Intel and AMD need to start quoting TDPs that match the boost frequencies they use to market the chips.
  • Cooe - Tuesday, June 11, 2019 - link

    ... AMD DOES include boost in their TDP calculations (unlike Intel), and always have. They make their methodology for this calculation freely available & explicit.
  • Spoelie - Wednesday, June 12, 2019 - link

    Look at these power tables for the 2700X:
    https://www.anandtech.com/show/12625/amd-second-ge...

    => You are only hitting 'TDP' figures at close to full loading, so "frequency max" is not limited by TDP but by the silicon.
    => Slightly lowering frequency *and voltage* really adds up the power savings over many cores. The load table of the 3700 will look different on the whole than that of the 3600X. The 3700 will probably lose out in some medium-threaded scenarios (not lightly and not heavily threaded).
  • Gastec - Wednesday, June 12, 2019 - link

    That's not actually the real power consumption. Most likely you will get a 3700X at 70-75 W according to the software app indications, but a bit more if tested with a multimeter. Add to that the inefficiency of the PSU, say 85-90%, and you have about 85 W of real power consumption. Somewhat better than my current 110W i7-860 or the 150+W Intel 9000 series ones, I would say :)
  • xrror - Monday, June 10, 2019 - link

    funny you say that. AMD TDP and Intel TDP differ. I think.

    HEY IAN, does AMD still measure TDP as "real" (total) dissipation power or Intel's weaksauce "Typical" dissipation power?
  • Teutorix - Tuesday, June 11, 2019 - link

    Intel rate TDP at base clocks. AMD do something a little more complex.

    Neither of them reflect real world power consumption for sustained workloads.
  • FreckledTrout - Tuesday, June 11, 2019 - link

    In desktops they are simply starting points for the cooling solution needed. They do a lot better in the laptop/tablet space, where TDPs make or break designs.
  • Cooe - Tuesday, June 11, 2019 - link

    Yes they do. A 2700X pulls almost exactly 105W under the kind of conditions you describe. Just because Intel's values are complete nonsense doesn't mean they all are.
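As a rough check on the boost-power arithmetic Teutorix gives above: dynamic CPU power scales approximately with frequency times voltage squared, so with the 9900K's clocks and illustrative (not measured) voltages:

\[
P_{\text{dyn}} \propto f V^{2}
\qquad\Rightarrow\qquad
\frac{P_{\text{boost}}}{P_{\text{base}}}
  \approx \frac{4.7\,\text{GHz}}{3.6\,\text{GHz}}
  \times \left(\frac{1.25\,\text{V}}{1.00\,\text{V}}\right)^{2}
  \approx 2.0
\]

On those assumptions, a 95 W base-clock figure roughly doubles to about 190 W under sustained all-core boost, in line with the ~200 W quoted in the thread.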
