Windows Optimizations

One of the long-standing thorns in the side of non-Intel processors running Windows has been the operating system's optimizations and scheduler behavior. We've seen in the past how Windows has not been kind to non-Intel microarchitecture layouts, such as AMD's previous module design in Bulldozer, Qualcomm's hybrid CPU strategy with Windows on Snapdragon, and more recently the multi-die arrangements on Threadripper that introduce different memory latency domains into consumer computing.

AMD naturally has a close relationship with Microsoft when it comes to identifying a non-regular core topology in a processor, and the two companies work together to ensure that thread and memory assignments, absent any program-driven direction, make the most of the system. With the Windows 10 May 2019 Update, some additional features have been put in place to get the most out of the upcoming Zen 2 microarchitecture and Ryzen 3000 silicon layouts.

The optimizations come on two fronts, both of which are reasonably easy to explain.

Thread Grouping

The first is thread allocation. When a processor has different ‘groups’ of CPU cores, there are different ways in which threads are allocated, all of which have pros and cons. The two extremes for thread allocation come down to thread grouping and thread expansion.

Thread grouping is where, as new threads are spawned, they are allocated onto cores directly next to cores that already have threads. This keeps the threads physically close together, which helps thread-to-thread communication; however, it can create regions of high power density, especially when the processor has many cores but only a couple are active.

Thread expansion is where new threads are placed on cores as far away from each other as possible. In AMD's case, this would mean a second thread spawning on a different chiplet, or a different core complex (CCX), as far away as possible. This allows the CPU to maintain high performance by avoiding regions of high power density, typically providing the best turbo performance across multiple threads.

The danger of thread expansion is when a program spawns two threads that end up on different sides of the CPU. In Threadripper, this could even mean that the second thread was on a part of the CPU that had a long memory latency, causing an imbalance in the potential performance between the two threads, even though the cores those threads were on would have been at the higher turbo frequency.

Because modern software, and video games in particular, now spawns multiple threads rather than relying on a single thread, and those threads need to talk to each other, AMD is moving from a hybrid thread expansion technique to a thread grouping technique. This means that one CCX will fill up with threads before another CCX is even touched. AMD believes that, despite the potential for high power density within one chiplet while the other sits idle, the tradeoff is still worth it for overall performance.
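To make the two policies concrete, here is a minimal user-space sketch of what "grouping" versus "expansion" would look like if an application pinned its own threads on Windows via SetThreadAffinityMask. The four-cores-per-CCX layout, the thread count, and the worker function are assumptions purely for illustration; the actual change described here lives inside the Windows scheduler, so applications do not need to do this themselves.

```cpp
#include <windows.h>
#include <thread>
#include <vector>

// Assumed topology for illustration: 4 cores per CCX, SMT ignored,
// so logical processors 0-3 sit in CCX0 and 4-7 sit in CCX1.
constexpr int kCoresPerCcx = 4;

void worker(int id) {
    // ... application work for thread 'id' ...
    (void)id;
}

int main() {
    const int numThreads = 4;
    std::vector<std::thread> threads;

    for (int i = 0; i < numThreads; ++i) {
        threads.emplace_back(worker, i);

        // "Grouping": fill CCX0 first -> threads land on cores 0, 1, 2, 3.
        DWORD_PTR groupedMask = DWORD_PTR(1) << (i % kCoresPerCcx);

        // "Expansion", for contrast: alternate CCXs -> cores 0, 4, 1, 5.
        // DWORD_PTR expandedMask = DWORD_PTR(1) << ((i % 2) * kCoresPerCcx + (i / 2));

        SetThreadAffinityMask(threads.back().native_handle(), groupedMask);
    }
    for (auto& t : threads) t.join();
    return 0;
}
```

The grouped layout keeps communicating threads on one CCX, sharing an L3 and avoiding an Infinity Fabric hop, at the cost of concentrating heat in one part of the silicon; the expanded layout does the opposite, which is exactly the trade-off described above.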

For Matisse, this should afford a nice improvement in limited-thread scenarios, and on the face of it, gaming. It will be interesting to see how much of an effect this has on the upcoming EPYC Rome CPUs or future Threadripper designs. The single benchmark AMD provided in its explanation was Rocket League at 1080p Low, which reported a +15% frame rate gain.

Clock Ramping

For any of our users familiar with our Skylake microarchitecture deep dive, you may remember that Intel introduced a feature called Speed Shift that enabled the processor to adjust between different P-states more freely, as well as to ramp from idle to load very quickly – from 100 ms down to 40 ms in the first version on Skylake, then down to 15 ms with Kaby Lake. It did this by handing P-state control back from the OS to the processor, which reacts based on instruction throughput and demand. With Zen 2, AMD is now enabling the same feature.

AMD already has considerably finer granularity in its frequency adjustments than Intel, allowing for 25 MHz steps rather than 100 MHz steps; however, enabling a faster ramp-to-load frequency jump is going to help AMD in very bursty workloads, such as WebXPRT (Intel's favorite for this sort of demonstration). According to AMD, the way this has been implemented for Zen 2 will require a BIOS update as well as the Windows 10 May 2019 Update, but it will reduce the frequency ramp time from ~30 milliseconds on Zen to ~1-2 milliseconds on Zen 2. It should be noted that this is much faster than the numbers Intel tends to quote.
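For readers who want to see ramp behaviour for themselves, one crude user-space approach is to let a core idle and then repeatedly time a fixed chunk of dependent arithmetic: each chunk takes less wall-clock time as the core clocks up, and the point at which the per-chunk time stops shrinking approximates the ramp latency. The sketch below is only an illustration of that idea; the chunk size, iteration count, and sleep period are arbitrary choices, not AMD's or Intel's measurement methodology.

```cpp
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <thread>

// One fixed-length chunk of dependent integer work. Its wall-clock duration
// scales roughly inversely with the core clock, so it acts as a crude frequency probe.
static uint64_t chunk(uint64_t x) {
    for (int i = 0; i < 200000; ++i) {
        x = x * 6364136223846793005ull + 1442695040888963407ull;  // serial dependency chain
    }
    return x;
}

int main() {
    using clk = std::chrono::steady_clock;
    volatile uint64_t sink = 0;

    // Let the core drop to an idle P-state before measuring.
    std::this_thread::sleep_for(std::chrono::seconds(1));

    auto start = clk::now();
    for (int i = 0; i < 50; ++i) {
        auto t0 = clk::now();
        sink = sink + chunk(static_cast<uint64_t>(i));
        auto t1 = clk::now();
        std::printf("t=%6lld us  chunk=%5lld us\n",
            (long long)std::chrono::duration_cast<std::chrono::microseconds>(t1 - start).count(),
            (long long)std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count());
    }
    return (int)sink;  // keep the work from being optimized away
}
```

On a processor that ramps slowly, the first handful of chunks should take noticeably longer than the steady-state ones; on a fast-ramping processor, the per-chunk time should settle within the first line or two of output.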

The technical name for AMD's implementation is CPPC2, or Collaborative Power Performance Control 2, and AMD states that it improves both bursty workloads and application loading. AMD cites a +6% gain in application launch times using PCMark10's app launch sub-test.

Hardened Security for Zen 2

Another aspect of Zen 2 is AMD's approach to the heightened security requirements of modern processors. As has been reported, a good number of the recent side-channel exploits do not affect AMD processors, primarily because of how AMD manages its TLBs, which have always required additional security checks since before most of this became an issue. Nonetheless, for the issues to which AMD is vulnerable, it has implemented hardware-based mitigations.

The change here concerns Speculative Store Bypass, known as Spectre v4, for which AMD now has additional hardware that works in conjunction with the OS, or with virtual machine managers such as hypervisors, to control the mitigation. AMD doesn't expect any performance change from these updates. Newer issues such as Foreshadow and ZombieLoad do not affect AMD processors.
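The control path here is collaborative: the hardware exposes a Speculative Store Bypass Disable control, and the operating system or hypervisor decides when to apply it. As a software-side illustration of that model, the sketch below queries the per-process SSB mitigation state on Linux through prctl(). This is a Linux-specific interface (kernel 4.17 and newer), used here only because it is a well-documented example of the OS half of the mechanism; it is not the Windows interface discussed in this article, nor anything AMD-specific.

```cpp
#include <cstdio>
#include <sys/prctl.h>
#include <linux/prctl.h>   // PR_GET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_* flags

int main() {
    // Ask the kernel how Speculative Store Bypass is handled for this process.
    long s = prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, 0, 0, 0);
    if (s < 0) {
        std::perror("prctl");  // kernel too old or interface unsupported
    } else if (s == PR_SPEC_NOT_AFFECTED) {
        std::puts("CPU not affected by Speculative Store Bypass");
    } else if (s & PR_SPEC_DISABLE) {
        std::puts("Store bypass disabled (mitigation active) for this process");
    } else if (s & PR_SPEC_ENABLE) {
        std::puts("Store bypass enabled (mitigation not applied) for this process");
    } else {
        std::printf("Speculation control status: 0x%lx\n", (unsigned long)s);
    }
    return 0;
}
```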

Comments

  • nonoverclock - Wednesday, June 12, 2019 - link

    It's related to platform power management.
  • wurizen - Wednesday, June 12, 2019 - link

    "Raw Memory Latency" graph shows 69ns for for 3200 and 3600 Mhz RAM. This "69ns" is irrelevant, right? Isn't the "high latency" associated with Ryzen and IF due to "Cross-CCX-Memory-Latency? This is suppose to be ~110ns at 3200 Mhz RAM as tested by PCPER/etc.... This in my experiences causes "micro-stuttering" in games like BO3/BF4/etc.... And, a "Ryzen-micro-stutter/pause" is different than a micro-stutter/pause associated with Intel. With Intel the micro-stutter/pause happens in BFV, for example, but they happen once or twice per match. With Ryzen, not only is the quality/feeling of the "micro-stutter/pause" different (seems worst), but it is constant throughout the match. One gets a feeling that it is not server-side, GPU side, nor WIndows 10 side. But, CPU-side issue... Infinity Fabric side. So, now Inifinity Fabric 2 is out. Is it 2.0 as in better? No more high latency? Is that 69ns Cross-CCX-memory latency? Why is AMD and Tech sites like Anand so... like... not talking about this?
  • igavus - Wednesday, June 12, 2019 - link

    You are misattributing things here. Your stutter is most def. not caused by memory access latency variations. For it to be visible on even a 144Hz monitor with the game running at the native rate, the differences would have to be obscenely high. That's just unrealistic.

    Not that it helps to determine what is causing your issues, but that's not it.
  • wurizen - Wednesday, June 12, 2019 - link

    What?
  • wurizen - Wednesday, June 12, 2019 - link

    Maybe you guys don't know what Cross-CCX Memory Latency is... my main goal in commenting was to ask what that SLIDE showing "Raw Memory Latency" refers to. Is it inter-core memory or intra-core memory (intra-core is the same as cross-CCX memory)...

    Inter-core memory is data being shuffled within the cores in a CCX module. Ryzen and Ryzen+ had two CCX modules with 4 cores each, totaling 8 cores for the 2700X, as an example. If the memory/data is traveling in the same CCX, the latency is fine and is even faster than Intel. This was true with Ryzen and Ryzen+.

    The issue is when data and memory is being shuffled between the CCX modules, traversing the so-called "Infinity Fabric." Intel uses a Ring Bus and doesn't have an equivalent measurement and data. Intel does have Mesh with X299, which is similar-ish to AMD's CCX and IF. But Intel Mesh latency is lower (I think, but haven't dug around since I don't care about it since I can't afford it)....

    So... that is what cross-CCX memory latency is... and that SLIDE shown in this article... WTF does that refer to? 69ns is similar to Intel Ring Bus memory latency, which has been shown to be fast enough and is the standard in regards to latency that won't cause issues...

    So... as PCPER tested, Ryzen Infinity Fabric 1.0 has a cross-CCX latency of around 110ns... and I stand my ground (it's not BIOS / reinstall Windows / the Windows scheduler / user error / imperceptible / a misunderstanding / a mis-attribution (I think)) that it was the reason why I suffered "micro-pauses/stutters" in some games. I had two systems at the time (3700k and R7-1700x) and so I was able to diagnose/observe/interpret what was happening....

    Also.. I would like to add that the "Ryzen Micro-stutter-Pause" FEELS/LOOKS/BEHAVES different... weird, right?
  • deltaFx2 - Thursday, June 13, 2019 - link

    You might "stand your ground" but that doesn't make it true. First of all, it's pretty clear you don't understand what you're talking about. Intel's Mesh is NOTHING like AMD's CCX. Intel Mesh is an alternative interconnect to ring bus; mesh scales better to many cores relative to ring. In theory mesh should be faster but for whatever reason intel's memory latency on skylake X parts are quite bad relative to skylake client (i.e. no bueno for gaming). I recall 70ns-ish for Skylake X vs 60ns for the Skylake client.

    Cross CCX memory latency should not matter unless you have shared memory across threads that span CCXs. Games don't need that many threads: 8 is overkill in many cases and each CCX can comfortably handle 8. Unless you pinned threads to cores and ran an experiment that conclusively showed that the issue was inter-ccx latency (I doubt it), your standing ground doesn't mean much. One could just as well argue that the microstutter was due to driver issues or other software/bios issues. Zen has been around for quite some time and if this was a widespread problem, we'd know.
  • wurizen - Friday, June 14, 2019 - link

    Well, I did mention "similar-ish" of Mesh to Infinity Fabric. It's meshy. And, I guess, you get "camaraderie" points for calling me out as "pretty clear you don't understand what you're talking about." That hurts, man! :(

    "In theory... Mesh should be faster..." nice way to switch subjects, bruh. yeh, i can throw some at ya, bruh! what?

    Cross-CCX-High-Memory-Latency DOES MATTER!

    You know why? Because a game shuffles data randomly. It doesn't know that traversing said Data from Core 0 (residing in CCX 1) to Core 3 (in CCX 2) via Infinity Fabric means that there is a latency penalty.

    Bruh
  • deltaFx2 - Friday, June 14, 2019 - link

    Actually, no, you're wrong about the mesh. Intel has a logically unified L3 cache; i.e. any core can access any slice of the L3, or even one core can use the entire L3 for itself. AMD has a logically distributed L3 cache which means only the cores from the CCX can access its cache. You simply cannot have core 3 (CCX 0) fetch a line into CCX1's cache. The tradeoff is that the distributed L3 is much faster than the logically unified one but the logically unified one obviously offers better hit rates and does not suffer from sharing issues.

    "Cross-CCX-High-Memory-Latency DOES MATTER!" Yes it does, no question about that. It matters when you have lock contention or shared memory that spans CCXs. In order to span CCXs, you should be using more than 8 threads (4 cores to a CCX, 2 threads per core). I don't think games are _that_ multithreaded. This article mentions a Windows 10 patch to ensure that threads get assigned to the same CCX before going to the adjacent one. It can be a problem for compute-intensive applications (y'know, real work), but games? I doubt it, and you should be able to fix it easily by pinning threads to cores in the same CCX.
  • deltaFx2 - Friday, June 14, 2019 - link

    "shared memory that spans CCXs." -> shared DIRTY memory. i.e. core 8 writes data, core 0 wants to read. All other kinds of sharing are a non-issue. Each CCX gets a local copy of the data.
  • wurizen - Friday, June 14, 2019 - link

    Why do you keep on blabbing on about this? Are you trying to flex some sort of muscle?
