Windows Optimizations

One of the key pain points for non-Intel processors running Windows has been the operating system's optimizations and scheduler behavior. We've seen in the past how Windows has not been kind to non-Intel microarchitecture layouts, such as AMD's previous module design in Bulldozer, Qualcomm's hybrid CPU strategy with Windows on Snapdragon, and more recently the multi-die arrangements on Threadripper that introduce different memory latency domains into consumer computing.

AMD obviously has a close relationship with Microsoft when it comes to identifying a non-regular core topology within a processor, and the two companies work to ensure that thread and memory assignments, absent program-driven direction, make the most out of the system. With the Windows 10 May 2019 Update, some additional features have been put in place to get the most out of the upcoming Zen 2 microarchitecture and Ryzen 3000 silicon layouts.

The optimizations come on two fronts, both of which are reasonably easy to explain.

Thread Grouping

The first is thread allocation. When a processor has different 'groups' of CPU cores, there are different ways in which threads can be allocated, each with pros and cons. The two extremes of thread allocation come down to thread grouping and thread expansion.

Thread grouping means that as new threads are spawned, they are allocated onto cores directly next to cores that already have threads. This keeps the threads close together for quicker thread-to-thread communication; however, it can create regions of high power density, especially when there are many cores on the processor but only a couple are active.

Thread expansion means placing threads on cores as far away from each other as possible. In AMD's case, this would mean a second thread spawning on a different chiplet, or a different core complex (CCX), as far away as possible. This allows the CPU to maintain high performance by avoiding regions of high power density, typically providing the best turbo performance across multiple threads.

The danger of thread expansion comes when a program spawns two threads that end up on different sides of the CPU. On Threadripper, this could even mean that the second thread landed on a part of the CPU with longer memory latency, causing an imbalance in the potential performance of the two threads, even though the cores those threads ran on would have been at the higher turbo frequency.

Because modern software, and video games in particular, now spawns multiple threads rather than relying on a single thread, and those threads need to talk to each other, AMD is moving from a hybrid thread expansion technique to a thread grouping technique. This means that one CCX will fill up with threads before another CCX is even touched. AMD believes that the potential for high power density within one chiplet, while the other might sit idle, is still worth it for the overall performance.
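
The scheduler handles this policy on its own once the OS update is installed, but it is easy to picture what 'grouping' means from an application's point of view. Below is a minimal C++ sketch, not AMD's or Microsoft's code, that pins worker threads onto adjacent logical processors so they stay within one CCX; the assumption that logical processors 0 through 3 sit on the same CCX is purely illustrative, since the real mapping depends on the SKU and on whether SMT is enabled.

    // Minimal sketch (not the scheduler's code): pin worker threads onto
    // adjacent logical processors so they stay within one CCX/chiplet,
    // mimicking the "thread grouping" policy described above.
    // Illustrative assumption: logical processors 0..3 map onto the first CCX.
    #include <windows.h>
    #include <cstdio>
    #include <thread>
    #include <vector>

    static void worker(unsigned logicalCpu)
    {
        // Restrict this thread to a single logical processor.
        DWORD_PTR mask = 1ull << logicalCpu;
        if (SetThreadAffinityMask(GetCurrentThread(), mask) == 0) {
            std::printf("Affinity failed for CPU %u (error %lu)\n",
                        logicalCpu, GetLastError());
            return;
        }
        // ... thread-to-thread work that benefits from a shared L3 goes here ...
    }

    int main()
    {
        const unsigned threadsToSpawn = 4;
        std::vector<std::thread> pool;

        // Fill adjacent logical processors first (grouping) instead of
        // spreading the threads across CCXs/chiplets (expansion).
        for (unsigned i = 0; i < threadsToSpawn; ++i)
            pool.emplace_back(worker, i);

        for (auto& t : pool)
            t.join();
        return 0;
    }

A real implementation would query GetLogicalProcessorInformationEx to discover the cache and die topology rather than hard-coding processor numbers, not least because SMT sibling threads may be enumerated next to each other.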

For Matisse, this should afford a nice improvement in limited-thread scenarios, and on the face of it, in gaming. It will be interesting to see how much of an effect this has on the upcoming EPYC Rome CPUs or future Threadripper designs. The single benchmark AMD provided in its explanation was Rocket League at 1080p Low, which reported a +15% frame rate gain.

Clock Ramping

For any of our readers familiar with our Skylake microarchitecture deep dive, you may remember that Intel introduced a new feature called Speed Shift that enabled the processor to adjust between different P-states more freely, as well as to ramp from idle to load very quickly: from 100 ms down to 40 ms in the first version in Skylake, then down to 15 ms with Kaby Lake. It did this by handing P-state control back from the OS to the processor, which reacted based on instruction throughput and request. With Zen 2, AMD is now enabling the same feature.

AMD already has considerably more granularity in its frequency adjustments than Intel, allowing for 25 MHz steps rather than 100 MHz steps; however, enabling a faster ramp-to-load frequency jump is going to help AMD when it comes to very burst-driven workloads, such as WebXPRT (Intel's favorite for this sort of demonstration). According to AMD, the way this has been implemented in Zen 2 will require BIOS updates as well as moving to the Windows 10 May 2019 Update, but it will reduce frequency ramping from ~30 milliseconds on Zen to ~1-2 milliseconds on Zen 2. It should be noted that this is much faster than the numbers Intel tends to provide.
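
CPPC2 is not something user code calls directly, but the effect of a faster ramp can be observed, roughly, by sampling the OS-reported core frequency around the start of a load. The sketch below is an illustration rather than a rigorous benchmark: it polls CallNtPowerInformation every millisecond or so while a spin loop starts on logical processor 0. The PROCESSOR_POWER_INFORMATION layout follows the documented ProcessorInformation output, and the reported CurrentMhz is the OS's estimate rather than a hardware counter, so the timing resolution should be taken with a grain of salt.

    // Rough illustration: watch the OS-reported frequency ramp from idle to load.
    // Link against PowrProf.lib. PROCESSOR_POWER_INFORMATION follows the
    // documented layout for the ProcessorInformation query; it normally lives
    // in a driver-kit header, so it is re-declared here for a user-mode build.
    #include <windows.h>
    #include <powrprof.h>
    #include <chrono>
    #include <cstdio>
    #include <thread>
    #include <vector>
    #pragma comment(lib, "PowrProf.lib")

    struct PROCESSOR_POWER_INFORMATION {
        ULONG Number, MaxMhz, CurrentMhz, MhzLimit, MaxIdleState, CurrentIdleState;
    };

    static ULONG reportedMhzOfCpu0()
    {
        SYSTEM_INFO si;
        GetSystemInfo(&si);
        std::vector<PROCESSOR_POWER_INFORMATION> info(si.dwNumberOfProcessors);
        // Return status ignored for brevity in this sketch.
        CallNtPowerInformation(ProcessorInformation, nullptr, 0, info.data(),
                               (ULONG)(info.size() * sizeof(info[0])));
        return info[0].CurrentMhz;
    }

    int main()
    {
        std::printf("before load: %lu MHz\n", reportedMhzOfCpu0());

        auto t0 = std::chrono::steady_clock::now();
        std::thread load([] {
            SetThreadAffinityMask(GetCurrentThread(), 1);  // spin on logical CPU 0
            for (volatile unsigned x = 0;; ++x) {}         // pure integer spin load
        });
        load.detach();  // demo only: the spin never exits, the process just ends

        // Sample the reported frequency every ~1 ms for 50 ms after load onset.
        for (int i = 0; i < 50; ++i) {
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
            auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                          std::chrono::steady_clock::now() - t0).count();
            std::printf("+%3lld ms: %lu MHz\n", (long long)ms, reportedMhzOfCpu0());
        }
        return 0;
    }

On a pre-update system the reported frequency should take tens of milliseconds to reach its boost value after the spin starts; with the BIOS and OS updates in place, AMD's claim is that the jump happens within the first sample or two.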

The technical name for AMD's implementation is CPPC2, or Collaborative Power Performance Control 2, and AMD's metrics state that it can improve performance in burst workloads as well as application loading. AMD cites a +6% gain in application launch times using PCMark10's app launch sub-test.

Hardened Security for Zen 2

Another aspect of Zen 2 is AMD's approach to the heightened security requirements of modern processors. As has been reported, a good number of the recent side-channel exploits do not affect AMD processors, primarily because of how AMD manages its TLBs, which have always required additional security checks before most of this became an issue. Nonetheless, for the issues to which AMD is vulnerable, it has implemented a full hardware-based security platform for them.

The change here comes for Speculative Store Bypass, known as Spectre v4: AMD now has additional hardware that works in conjunction with the OS or virtual machine managers, such as hypervisors, in order to control it. AMD doesn't expect any performance change from these updates. Newer issues such as Foreshadow and ZombieLoad do not affect AMD processors.
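
How this capability is advertised to software can be checked from user space via CPUID: AMD reports its Speculative Store Bypass controls in extended leaf 0x80000008, register EBX. The sketch below uses the MSVC __cpuid intrinsic; the bit positions (SSBD in bit 24, VIRT_SSBD in bit 25, SSB_NO in bit 26) follow the publicly documented feature flags, but they are worth verifying against the current revision of AMD's manuals before relying on them.

    // Minimal sketch: read AMD's Speculative Store Bypass (Spectre v4) feature
    // bits from CPUID leaf 0x80000008, register EBX. Bit positions assumed from
    // public documentation: SSBD = 24, VIRT_SSBD = 25, SSB_NO = 26.
    #include <intrin.h>
    #include <cstdio>

    int main()
    {
        int regs[4] = {};                          // EAX, EBX, ECX, EDX

        __cpuid(regs, 0x80000000);                 // highest extended leaf
        if (static_cast<unsigned>(regs[0]) < 0x80000008u) {
            std::puts("CPUID leaf 0x80000008 not supported");
            return 1;
        }

        __cpuid(regs, 0x80000008);
        const unsigned ebx = static_cast<unsigned>(regs[1]);

        std::printf("SSBD      (hardware disable control): %s\n", (ebx >> 24) & 1 ? "yes" : "no");
        std::printf("VIRT_SSBD (control via virtual MSR) : %s\n", (ebx >> 25) & 1 ? "yes" : "no");
        std::printf("SSB_NO    (not vulnerable at all)   : %s\n", (ebx >> 26) & 1 ? "yes" : "no");
        return 0;
    }

The operating system, or a hypervisor via the virtualized control, uses these flags to decide whether a mitigation needs to be applied at all; a part that reports SSB_NO needs none.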

Comments

  • GreenReaper - Wednesday, June 12, 2019 - link

    The last you heard? It says clearly on page 6 that there is "single-op" AVX 256, and on page 9 explicitly that the width has been increased to 256 bits:
    https://www.anandtech.com/show/14525/amd-zen-2-mic...
    https://www.anandtech.com/show/14525/amd-zen-2-mic...

    To be honest, I don't mind how it's implemented as long as the real-world performance is there at a reasonable price and power budget. It'll be interesting to see the difference in benchmarks.
  • arashi - Wednesday, June 12, 2019 - link

    Don't expect too much cognitive abilities regarding AMD from HStewart, his pay from big blue depends on his misinformation disguised as misunderstanding.
  • Qasar - Thursday, June 13, 2019 - link

    HA ! so that explains it..... the more misinformation and misunderstanding he spreads.. the more he gets paid.......
  • HStewart - Thursday, June 13, 2019 - link

    I don't get paid for any of this - I'm just not extremely heavily AMD biased like a lot of people here. It's just really interesting to me that within a couple of days of Intel releasing information about the new Ice Lake processor with its 2 load / store units, we get bla bla here about Zen+++. Just because it's 7nm does not mean they changed much.

    Maybe AMD did change it to 256-bit width - and not dual 128 - they should, as AVX2 has been that way for a long time and Ice Lake is now 512. Maybe by the time of Zen 4 or Zen+++++ it will have AVX-512 support.
  • Korguz - Thursday, June 13, 2019 - link

    no.. but it is known.. you are heavily intel biased..

    whats zen +++++++++ ????
    x 86-512 ??????
    but you are usually the one spreading misinformation about amd...
    " and support for single-operation AVX-256 (or AVX2). AMD has stated that there is no frequency penalty for AVX2 " " AMD has increased the execution unit width from 128-bit to 256-bit, allowing for single-cycle AVX2 calculations, rather than cracking the calculation into two instructions and two cycles. This is enhanced by giving 256-bit loads and stores, so the FMA units can be continuously fed. "
  • HStewart - Thursday, June 13, 2019 - link

    Zen+++++ was my joke as every AMD fan jokes about Intel 10+++ Just get over it

    x-86 512 - is likely not going to happen, it just to make sure people are not confusing vector processing bits with cpu bits 64 bit is what most os uses now. for last decade or so

    Intel has been using 256-bit AVX2 since day one; the earlier AMD chips only had two combined 128-bit units - did they fix this with Zen 2? This is of course different from AVX-512, which is standard in all Ice Lake and higher CPUs and older Xeons.
  • Qasar - Thursday, June 13, 2019 - link

    sorry HStewart... but even some intel fans are making fun of the 14++++++ and it would be funny.. if you were making fun of the process node.. not the architecture...
    "x-86 512 - is likely not going to happen, it just to make sure people are not confusing vector processing bits with cpu bits 64 bit is what most os uses now. for last decade or so" that makes NO sense...
  • HStewart - Thursday, June 13, 2019 - link

    One more thing I stay away from AMD unless there are one that bias against Intel like spreading misinformation that AVX 512 is misleading. and it really not 512 surely they do not have proof of that.

    AVX 512 is not the same as x86-512, I seriously doubt we will ever need that that but then at time people didn't think we need x86-64 - I remember original day of 8088,. no body thought we needed more 64meg AVX-512 is for vectors which is totally different.
  • just4U - Thursday, June 13, 2019 - link

    I always have a higher-end Intel setup and normally an AMD setup as well.. plus I build a fair amount of setups on both. No bias here except maybe.. wanting AMD to be competitive. The news that dropped over the past month was the biggest for AMD in over a decade HS.. If you can't even acknowledge that (even grudgingly..) then geez.. I dunno.

    This has been awesome news for the industry and will put intel on their toes to do better. Be happy about it.
  • Xyler94 - Monday, June 17, 2019 - link

    HStewart, please. You don't stay away from AMD at all. You take ANY opportunity to try and make Intel look better than AMD.

    There was an article, it was Windows on ARM. You somehow managed to make a post about Intel winning over AMD. Don't spew that BS. People don't hate Intel as much as you make them out to be, they don't like you glorifying Intel.
