Section by Dr. Ian Cutress (Original article)

Windows Optimizations

One of the key pain points for non-Intel processors running Windows has been the optimizations and scheduler arrangements in the operating system. We’ve seen in the past how Windows has not been kind to non-Intel microarchitecture layouts, such as AMD’s previous module design in Bulldozer, the Qualcomm hybrid CPU strategy with Windows on Snapdragon, and more recently the multi-die arrangements on Threadripper that introduce different memory latency domains into consumer computing.

Obviously AMD has a close relationship with Microsoft when it comes to identifying a non-regular core topology in a processor, and the two companies work towards ensuring that thread and memory assignments, absent program-driven direction, attempt to make the most out of the system. With the May 10th update to Windows, some additional features have been put in place to get the most out of the upcoming Zen 2 microarchitecture and Ryzen 3000 silicon layouts.

The optimizations come on two fronts, both of which are reasonably easy to explain.

Thread Grouping

The first is thread allocation. When a processor has different ‘groups’ of CPU cores, there are different ways in which threads are allocated, all of which have pros and cons. The two extremes for thread allocation come down to thread grouping and thread expansion.

Thread grouping is where new threads, as they are spawned, are allocated onto cores directly next to cores that already have threads. This keeps the threads close together for thread-to-thread communication; however, it can create regions of high power density, especially when there are many cores on the processor but only a couple are active.

Thread expansion is where new threads are placed as far away from existing ones as possible. In AMD’s case, this would mean a second thread spawning on a different chiplet, or a different core complex/CCX, as far away as possible. This allows the CPU to maintain high performance by avoiding regions of high power density, typically providing the best turbo performance across multiple threads.

The danger of thread expansion is when a program spawns two threads that end up on different sides of the CPU. In Threadripper, this could even mean that the second thread was on a part of the CPU that had a long memory latency, causing an imbalance in the potential performance between the two threads, even though the cores those threads were on would have been at the higher turbo frequency.

Because modern software, and in particular video games, now spawns multiple threads rather than relying on a single thread, and those threads need to talk to each other, AMD is moving from a hybrid thread expansion technique to a thread grouping technique. This means that one CCX will fill up with threads before another CCX is even accessed. AMD believes that, despite the potential for high power density within one chiplet while the other sits inactive, the trade-off is still worth it for overall performance.
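The two allocation policies can be sketched as core-selection orderings. This is a hypothetical illustration, not AMD's or Windows' actual scheduler code; the 16-core, 4-CCX layout and function names are assumptions chosen to mirror a Matisse-like topology.

```python
# Illustrative sketch of the two thread-allocation extremes, assuming a
# Matisse-like layout: 4 CCXs of 4 cores each (core IDs 0-15).
CORES_PER_CCX = 4
NUM_CCX = 4  # 16 cores total

def grouping_order(n):
    """Thread grouping: fill one CCX completely before touching the next."""
    return list(range(n))  # cores 0,1,2,3 (CCX0), then 4,5,6,7 (CCX1), ...

def expansion_order(n):
    """Thread expansion: spread threads one per CCX before doubling up."""
    order = []
    for slot in range(CORES_PER_CCX):
        for ccx in range(NUM_CCX):
            order.append(ccx * CORES_PER_CCX + slot)
    return order[:n]

print(grouping_order(4))   # [0, 1, 2, 3] -> all four threads share one CCX's L3
print(expansion_order(4))  # [0, 4, 8, 12] -> one thread per CCX
```

Under grouping, four game threads land in one CCX and communicate through a shared L3; under expansion, the same four threads each get a cool, high-turbo core but pay cross-CCX (or cross-chiplet) latency to talk to each other.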

For Matisse, this should afford a nice improvement in limited-thread scenarios and, on the face of it, in gaming. It will be interesting to see how much of an effect this has on the upcoming EPYC Rome CPUs or future Threadripper designs. The single benchmark AMD provided in its explanation was Rocket League at 1080p Low, which reported a +15% frame rate gain.

Clock Ramping

Users familiar with our Skylake microarchitecture deep dive may remember that Intel introduced a new feature called Speed Shift, which enabled the processor to adjust between different P-states more freely, as well as to ramp from idle to load very quickly: from 100 ms down to 40 ms in the first version in Skylake, then down to 15 ms with Kaby Lake. It did this by handing P-state control back from the OS to the processor, which reacts based on instruction throughput and request. With Zen 2, AMD is now enabling the same feature.

AMD already offers finer granularity in its frequency adjustments than Intel, allowing 25 MHz steps rather than 100 MHz steps; however, enabling a faster ramp-to-load frequency jump is going to help AMD in very burst-driven workloads, such as WebXPRT (Intel’s favorite for this sort of demonstration). According to AMD, the way this has been implemented in Zen 2 will require a BIOS update as well as the Windows May 10th update, but it will reduce frequency ramping from ~30 milliseconds on Zen to ~1-2 milliseconds on Zen 2. It should be noted that this is much faster than the numbers Intel tends to provide.
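A back-of-the-envelope model shows why ramp time matters for short bursts. This is an illustrative sketch only: it uses the article's ~30 ms and ~1-2 ms figures, and the assumption that a core completes work at half speed while ramping is mine, not AMD's.

```python
# Toy model: time to finish a short burst of work when the core starts at a
# lower clock and takes ramp_ms to reach full boost. Assumes (illustratively)
# that the core runs at half its boosted rate for the whole ramp window.

def burst_time_ms(work_ms, ramp_ms, slowdown=2.0):
    """Wall time for a burst that would take work_ms at full boost clock."""
    if work_ms <= ramp_ms / slowdown:
        return work_ms * slowdown          # burst finishes before boost arrives
    done_during_ramp = ramp_ms / slowdown  # work completed while still ramping
    return ramp_ms + (work_ms - done_during_ramp)

# A 10 ms burst, typical of a web-benchmark subtask:
print(burst_time_ms(10, ramp_ms=30))  # Zen-like:   20.0 ms (never reaches boost)
print(burst_time_ms(10, ramp_ms=2))   # Zen 2-like: 11.0 ms
```

Even with these crude assumptions, the 30 ms ramp nearly doubles the wall time of a 10 ms burst, while the 1-2 ms ramp makes it almost invisible, which is exactly the behavior burst-driven benchmarks like WebXPRT reward.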

AMD’s implementation is known as CPPC2, or Collaborative Power Performance Control 2, and AMD’s metrics state that it can improve performance in burst workloads as well as application loading. AMD cites a +6% gain in application launch times using PCMark10’s app launch sub-test.

Hardened Security for Zen 2

Another aspect to Zen 2 is AMD’s approach to the heightened security requirements of modern processors. As has been reported, a good number of the recent side-channel exploits do not affect AMD processors, primarily because of how AMD manages its TLB buffers, which have always required additional security checks, even before most of these issues came to light. Nonetheless, for the issues to which AMD is vulnerable, it has implemented a full hardware-based security platform.

The change here addresses Speculative Store Bypass, known as Spectre v4: AMD now has additional hardware that works in conjunction with the OS, or with virtual machine managers such as hypervisors, to control the exploit. AMD doesn’t expect any performance change from these updates. Newer issues such as Foreshadow and ZombieLoad do not affect AMD processors.

Comments

  • Death666Angel - Tuesday, July 9, 2019 - link

    Well, the thing is that motherboard manufacturers, motherboard revisions, motherboard layout and BIOS versions do play a role as well, though. The memory controller is just one piece of the puzzle. If you have a CPU with a great memory controller, it doesn't mean it performs the same on all boards. And it doesn't mean it performs the same with all RAM either. Sometimes the actual traces on motherboards are crap for certain clockspeeds. Sometimes the BIOS numbers for secondary and tertiary timings are crap at certain clockspeeds and get better in later revisions, seemingly allowing for better memory clockspeeds when it really was just a question of auto vs manual if you knew what you were doing. Sometimes the SoC voltage is worse on that board vs the other and that influences things. The thing is, across the board, X570 motherboards have higher advertised OC clockspeeds for the memory and Ryzen 3000 has higher guaranteed clockspeeds. And Anandtech believes that is the thing that counts, not if you can get x clockspeed stable. At least in the vanilla CPU articles. They do separate RAM articles often. Reply
  • BLu3HaZe - Tuesday, July 9, 2019 - link

    "Some motherboard vendors are advertising speeds of up to DDR4-4400 which until Zen 2, was unheard of. Zen 2 also marks a jump up to DDR4-3200 up from DDR4-2933 on Zen+, and DDR4-2667 on Zen."

    How about now? :)

    And I believe the authors mean to say that official support is up to DDR4-3200 on X570 boards, while older boards were rated lower "officially", corresponding to the generation they launched with. Speeds above that would be listed with (OC) clearly marked in memory support.
    Anything above the 'rated' speeds and you're technically overclocking the Infinity Fabric, unless you run in 2:1 mode, which is only on Zen 2 anyhow, so your mileage will definitely vary.

    Even the 9900K 'officially' supports only DDR4-2666 but we all know how high it can go without any issues combined with XMP OC.
    Reply
  • Ratman6161 - Wednesday, July 10, 2019 - link

    In Zen and Zen+, the Infinity Fabric speed was tied to the memory speed, so overclocking the RAM also overclocked the Infinity Fabric. In Zen 2, the Infinity Fabric clock can run independently of the RAM speed. Reply
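To put rough numbers on the coupling the comments above describe, here is a quick illustrative calculation. The 2:1 divider behavior is the Zen 2 mechanism the thread mentions; the specific DDR4 ratings used are example inputs, not measured values.

```python
# Infinity Fabric clock (FCLK) vs. memory clock: DDR4 ratings are in
# transfers/s, so the actual memory clock (MEMCLK) is half the rating.
# In 1:1 mode FCLK matches MEMCLK; Zen 2's 2:1 mode halves FCLK instead.

def fclk_mhz(ddr_rating, mode="1:1"):
    memclk = ddr_rating / 2            # e.g. DDR4-3600 -> 1800 MHz MEMCLK
    return memclk if mode == "1:1" else memclk / 2

print(fclk_mhz(3600))          # 1800.0 MHz in 1:1 mode
print(fclk_mhz(4400, "2:1"))   # 1100.0 MHz - fabric slows, latency rises
```

This is why very high advertised memory speeds are a mixed blessing: past the 1:1 ceiling the fabric drops to the 2:1 divider, and the latency penalty can outweigh the bandwidth gain.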
  • Targon - Monday, July 8, 2019 - link

    I am curious about the DDR4-3200 CL16 memory in the Ryzen test. CL16 RAM is considered the "cheap crap" when it comes to DDR4-3200, and my own Hynix M-die garbage memory is exactly that, G.Skill Ripjaws V 3200 CL16. On first generation Ryzen, getting it to 3200 speeds just hasn't happened, and I know that for gaming, CL16 vs. CL14 is enough to cause the slight loss to Intel (meaning Intel wouldn't have the lead in the gaming tests). Reply
  • Ninjawithagun - Monday, July 8, 2019 - link

    Regardless of whether you have a 'crap' DRAM kit at CL16 or a much more expensive kit with a lower CL rating, it isn't going to make any significant difference in performance. This has been proven again and again. Reply
  • Ratman6161 - Wednesday, July 10, 2019 - link

    "CL16 RAM is considered the "cheap crap" when it comes to DDR4-3200"

    Since when? Yes, it's cheap(er), but I'd disagree with the "crap" part. I needed 32 GB of RAM, so that's either 2x16 with 16 GB modules usually being double sided (a crap shoot) or 4x8 with 4 modules being a crap shoot. Looking at current pricing (not the much higher prices from back when I bought), Newegg has the G.Skill Ripjaws 2x16 CAS 16 kit for $135, while the Trident Z 2x16 CAS 15 is $210 and the CAS 14 Trident Z is $250. So I'd be paying $75 to $115 more... for something that isn't likely to do any better in my real-world configuration. Even if I could hit its advertised CAS 15 or 14, how much is that worth? So I'd say the Ripjaws is not "cheap crap". It's a "value" :)
    Reply
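The CAS comparison in the comments above is easy to ground in actual nanoseconds, since absolute latency is CAS cycles divided by the memory clock. A quick sketch (the kits named above are the commenters' examples; the math itself is standard):

```python
# Absolute CAS latency in nanoseconds: CL cycles divided by the memory clock.
# DDR4 ratings are transfers/s, so the memory clock is half the rating.

def cas_ns(ddr_rating, cl):
    memclk_mhz = ddr_rating / 2
    return cl / memclk_mhz * 1000  # cycles / MHz -> nanoseconds

print(round(cas_ns(3200, 16), 2))  # 10.0 ns for DDR4-3200 CL16
print(round(cas_ns(3200, 14), 2))  # 8.75 ns for DDR4-3200 CL14
```

At the same DDR4-3200 speed, the CL16 kit is only about 1.25 ns slower on first-word access than the CL14 kit, which is why the price premium rarely shows up in real-world results.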
  • Domaldel - Wednesday, July 10, 2019 - link

    It's considered "cheap crap" because you can't guarantee that it's Samsung B-die at those speeds, while you can with DDR4-3200 CL14, as nothing other than good B-die is able to reach those speeds and latencies.
    What that means is that you actually have a shot at manually overclocking it further while keeping compatibility with Ryzen (if you tweak the timings and sub-timings), while you couldn't really with other memory kits on the first two generations of Ryzen.
    I don't have a Ryzen 3xxx series of chip so I can't really comment on those...
    Reply
  • WaltC - Monday, July 15, 2019 - link

    Since about the 2nd AGESA implementation, on my original X370 Ryzen 1 mboard, my "cheap crap"... ;) ...Patriot Viper Elite CL16 2x8GB has had no problem with 3200 MHz at stock timings. Used the same on an X470 mboard, and now it's running at 3200 MHz on my X570 Aorus Master board, no problems. Reply
  • jgraham11 - Tuesday, July 16, 2019 - link

    DDR4-3200 is apparently not an overclock. Says so on AMD's specs page for the 3700X:

    https://www.amd.com/en/products/cpu/amd-ryzen-7-37...
    Reply
