Windows Optimizations

One of the longstanding pain points for non-Intel processors running Windows has been the operating system's optimizations and scheduler behavior. We've seen in the past how Windows has not been kind to non-standard microarchitecture layouts: AMD's previous module design in Bulldozer, Qualcomm's hybrid CPU strategy with Windows on Snapdragon, and more recently the multi-die arrangements on Threadripper that introduce different memory latency domains into consumer computing.

Naturally, AMD has a close relationship with Microsoft when it comes to identifying a processor's irregular core topology, and the two companies work to ensure that thread and memory assignments, absent any program-driven direction, make the most of the system. With the Windows 10 May 2019 Update, additional features have been put in place to get the most out of the upcoming Zen 2 microarchitecture and Ryzen 3000 silicon layouts.

The optimizations come on two fronts, both of which are reasonably easy to explain.

Thread Grouping

The first is thread allocation. When a processor has different ‘groups’ of CPU cores, there are different ways in which threads are allocated, all of which have pros and cons. The two extremes for thread allocation come down to thread grouping and thread expansion.

Thread grouping is where new threads, as they are spawned, are allocated onto cores directly next to cores that already have threads. This keeps threads close together for fast thread-to-thread communication; however, it can create regions of high power density, especially when there are many cores on the processor but only a couple are active.

Thread expansion is where new threads are placed on cores as far away from each other as possible. In AMD's case, this would mean a second thread spawning on a different chiplet, or a different core complex (CCX), as far away as possible. This allows the CPU to maintain high performance by avoiding regions of high power density, typically providing the best turbo performance across multiple threads.
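
To make the trade-off concrete, here is a minimal Python sketch simulating the order in which threads land on cores under each policy. The topology (four 4-core CCXes, two per chiplet) and the policy functions are our own illustration, not the actual Windows scheduler logic:

```python
# Illustrative only: simulate the order in which threads land on cores
# under the two allocation policies. Hypothetical topology: four CCXes
# of four cores each (CCX 0-1 on one chiplet, CCX 2-3 on the other).
CORES_PER_CCX = 4
NUM_CCX = 4

def grouping_order(num_threads):
    """Fill one CCX completely before touching the next."""
    return [(t // CORES_PER_CCX, t % CORES_PER_CCX) for t in range(num_threads)]

def expansion_order(num_threads):
    """Round-robin across CCXes so threads land as far apart as possible."""
    return [(t % NUM_CCX, t // NUM_CCX) for t in range(num_threads)]

if __name__ == "__main__":
    for name, policy in (("grouping", grouping_order), ("expansion", expansion_order)):
        placement = policy(6)  # six threads, e.g. a multi-threaded game
        print(name.ljust(10), ["CCX%d/core%d" % p for p in placement])
```

Under grouping, the six threads share at most two CCXes and communicate through a local L3; under expansion they spread across all four, which keeps power density low but forces cross-CCX, and potentially cross-chiplet, traffic.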

The danger of thread expansion arises when a program spawns two threads that end up on different sides of the CPU. In Threadripper, this could even mean that the second thread landed on a part of the CPU with a longer memory latency, causing an imbalance in the potential performance between the two threads, even though the cores those threads ran on would have been at the higher turbo frequency.

Because modern software, and video games in particular, now spawns multiple threads rather than relying on a single thread, and those threads need to talk to each other, AMD is moving from a hybrid thread expansion technique to a thread grouping technique. This means that one CCX will fill up with threads before another CCX is even touched. AMD believes that, despite the potential for high power density within one chiplet while the other sits inactive, the trade-off is still worth it for overall performance.
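
Programs have always been able to impose grouping themselves via processor affinity; what the update changes is the default for threads that give no direction. As a rough Windows-only illustration using ctypes, a thread could be confined to one CCX as below. Note the assumption that logical processors 0-3 map to a single CCX, which is not guaranteed on every system:

```python
# Illustrative only: confine the current thread to logical processors 0-3,
# roughly emulating "thread grouping" on a two-CCX part. Windows-only, and
# it assumes processors 0-3 map to one CCX, which is not guaranteed.
import ctypes

kernel32 = ctypes.windll.kernel32

CCX0_MASK = 0b1111  # logical processors 0-3 (assumed to be one CCX)

thread = kernel32.GetCurrentThread()  # pseudo-handle for the calling thread
previous = kernel32.SetThreadAffinityMask(thread, CCX0_MASK)
if previous == 0:
    raise ctypes.WinError()
print("Pinned to CCX 0; previous affinity mask was %#x" % previous)
```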

For Matisse, this should afford a nice improvement in limited-thread scenarios and, on the face of it, gaming. It will be interesting to see how much of an effect this has on the upcoming EPYC Rome CPUs or future Threadripper designs. The single benchmark AMD provided in its explanation was Rocket League at 1080p Low, which reported a +15% frame rate gain.

Clock Ramping

For any of our readers familiar with our Skylake microarchitecture deep dive, you may remember that Intel introduced a feature called Speed Shift that enabled the processor to adjust between different P-states more freely, and to ramp from idle to load very quickly: from 100 ms down to 40 ms in its first version in Skylake, then down to 15 ms with Kaby Lake. It did this by handing P-state control back from the OS to the processor, which reacts based on instruction throughput and requests. With Zen 2, AMD is now enabling the same feature.

AMD already has significantly finer granularity in its frequency adjustments than Intel, allowing 25 MHz steps rather than 100 MHz steps; even so, a faster ramp-to-load frequency jump is going to help AMD in very burst-driven workloads, such as WebXPRT (Intel's favorite for this sort of demonstration). According to AMD, the way this has been implemented in Zen 2 will require both a BIOS update and the Windows 10 May 2019 Update, but it reduces frequency ramping from ~30 milliseconds on Zen to ~1-2 milliseconds on Zen 2. It should be noted that this is much faster than the numbers Intel tends to provide.
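
There is no direct API for watching this handoff, but the effect of fast ramping can be eyeballed with a crude micro-benchmark: idle for a moment, then count busy-loop iterations in consecutive short windows and watch throughput climb. A minimal Python sketch, using only the standard library (interpreter overhead and timer resolution make this indicative at best):

```python
# Illustrative only: a crude probe of frequency ramp-up after idle.
# Count busy-loop iterations in consecutive 1 ms windows; with fast
# ramping the count should plateau within the first few windows.
import time

def busy_window(duration_s):
    """Spin for duration_s seconds and return the iteration count."""
    iterations = 0
    end = time.perf_counter() + duration_s
    while time.perf_counter() < end:
        iterations += 1
    return iterations

time.sleep(1.0)  # let the core drop back toward idle

counts = [busy_window(0.001) for _ in range(20)]
peak = max(counts)
for i, c in enumerate(counts):
    print("window %2d: %8d iterations (%3d%% of peak)" % (i, c, 100 * c // peak))
```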

The technical name for AMD's implementation is CPPC2, or Collaborative Processor Performance Control 2, and AMD's metrics state that this improves both burst workloads and application loading. AMD cites a +6% performance gain in application launch times using PCMark10's app launch sub-test.

Hardened Security for Zen 2

Another aspect of Zen 2 is AMD's approach to the heightened security requirements of modern processors. As has been reported, a good number of the recent side-channel exploits do not affect AMD processors, primarily because of how AMD manages its TLBs, which have always required additional security checks before most of this became an issue. Nonetheless, for the issues to which AMD is vulnerable, it has implemented full hardware-based mitigations.

The change here comes for Speculative Store Bypass, known as Spectre v4: AMD now has additional hardware that works in conjunction with the OS or virtual machine managers such as hypervisors to control it. AMD doesn't expect any performance change from these updates. Newer issues such as Foreshadow and ZombieLoad do not affect AMD processors.
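
For readers who want to see how their own system reports these issues, the Linux kernel exposes mitigation status through sysfs. A minimal sketch, assuming a Linux machine (the paths are the standard sysfs locations; output naturally differs per CPU and kernel):

```python
# Illustrative only: print the kernel's reported mitigation status for the
# issues discussed above, via the standard Linux sysfs interface.
from pathlib import Path

VULNS = Path("/sys/devices/system/cpu/vulnerabilities")

# spec_store_bypass is Spectre v4; l1tf is Foreshadow; mds covers ZombieLoad.
for name in ("spec_store_bypass", "spectre_v1", "spectre_v2", "l1tf", "mds"):
    entry = VULNS / name
    status = entry.read_text().strip() if entry.exists() else "not reported"
    print("%-18s %s" % (name + ":", status))
```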

Comments

  • nandnandnand - Tuesday, June 11, 2019 - link

    Shouldn't we be looking at highest transistors per square millimeter plotted over time? The Wikipedia article helpfully includes die area for most of the processors, but the graph near the top just plots number of transistors without regard to die size. If Intel's Xe hype is accurate, they will be putting out massive GPUs (1600 mm^2?) made of multiple connected dies, and AMD already does something similar with CPU chiplets.

    I know that the original Moore's law did not take into account die size, multi chip modules, etc. but to ignore that seems cheaty now. Regardless, performance is what really matters. Hopefully we see tight integration of CPU and L4 DRAM cache boosting performance within the next 2-3 years.
  • Wilco1 - Wednesday, June 12, 2019 - link

    Moore's law is about transistors on a single integrated chip. But yes density matters too, especially actual density achieved in real chips (rather than marketing slides). TSMC 7nm does 80-90 million transistors/mm^2 for A12X, Kirin 980, Snapdragon 8cx. Intel is still stuck at ~16 million transistors/mm^2.
  • FunBunny2 - Wednesday, June 12, 2019 - link

    enough about Moore, unless you can get it right. Moore said nothing about transistors. He said that compute capability was doubling about every second year. This is what he actually wrote:

    "The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. "

    [the wiki]

    the main reason the Law has slowed is just physics: Xnm is little more (teehee) than propaganda for some years, at least since the end of agreed dimensions of what a 'transistor' was. couple that with the coalescing of the maths around 'the best' compute algorithms; complexity has run into the limiting factor of the maths. you can see it in these comments: gimme more ST, I don't care about cores. and so on. Mother Nature's Laws are fixed and immutable; we just don't know all of them at any given moment, but we're getting closer. in the old days, we had the saying 'doing the easy 80%'. we're well into the tough 20%.
  • extide - Monday, June 17, 2019 - link

    "The complexity for minimum component costs..."

    He was directly referring to transistor count with the word "complexity" in your quote -- so yes he was literally talking about transistor count.
  • crazy_crank - Tuesday, June 11, 2019 - link

    Actually the number of cores doesn't matter AFAIK, as Moore's Law originally was only about transistor density, so all you need to compare is transistors per square millimeter. Looked at like this, it actually doesn't even look that bad
  • chada - Wednesday, June 12, 2019 - link

    Moore's law specifically talks about density doubling. If they can fit 6 cores into the same footprint, you can absolutely consider 6 cores for a density comparison. That being said, we have been off this pace for a while.
  • III-V - Wednesday, June 12, 2019 - link

    >Moore's law specifically talks about density doubling.

    No it doesn't.

    Jesus Christ, why is Moore's Law so fucking hard for people to understand?
  • LordSojar - Thursday, June 13, 2019 - link

    Why it ever became known as a "law" is totally beyond me. More like Moore's Theory (and that's pushing it, as he made a LOT of suppositions about things he couldn't possibly predict, not being an expert in those areas. ie material sciences, quantum mechanics, etc)
  • sing_electric - Friday, June 14, 2019 - link

    This. He wasn't describing something fundamental about the way nature works - he was looking at technological advancements in one field over a short time frame. I guess 'Moore's Observation" just didn't sound as good.

    And the reason why no one seems to get it right is that Moore wrote and said several different things about it over the years - he'd OBSERVED that the number of transistors you could get on an IC was increasing at a certain rate, and from there, that this led to performance increases, so both the density AND performance arguments have some amount of accuracy behind them.

    And almost no one points out that it's ultimately just a function of geometry: as process decreases linearly (say, 10 units to 7 units), you get a geometric increase in the # of transistors because you get to multiply that by two dimensions. Other benefits - like decreased power use per transistor, etc. - ultimately flow largely from that as well (or they did, before we had to start using more and more exotic materials to get shrinks to work.)
  • FunBunny2 - Thursday, June 13, 2019 - link

    "Jesus Christ, why is Moore's Law so fucking hard for people to understand?"

    because, in this era of truthiness, simplistic is more fun than reality. Moore made his observation in 1965, at which time IC fabrication had not even reached LSI levels. IOW, the era when node size was dropping like a stone and frequency was rising like a Saturn rocket; performance increases with each new iteration of a device were obvious to even the most casual observer. just like prices in the housing market before the Great Recession, the simpleminded still think that both vectors will continue forevvvvaaahhh.
