Conclusions

In this mini-test, we compared AMD's Game Mode, as originally envisioned by AMD, against the default Creator Mode. Game Mode sits as an extra option in the AMD Ryzen Master software, whereas Creator Mode is enabled by default. Game Mode does two things. First, it adjusts the memory configuration: rather than seeing the DRAM as one uniform block of memory with an 'average' latency, the system splits the memory into near memory closest to the active CPU, and far memory for DRAM connected via the other silicon die. Second, it disables the cores on one of the silicon dies, but retains that die's PCIe lanes, IO, and DRAM support. This removes cross-die thread migration, offers faster memory for applications that need it, and aims to lower the latency of the cores used for gaming by simplifying the layout. The downside of Game Mode is raw performance when peak CPU throughput is needed: by disabling half the cores, any throughput-limited task loses half of its resources. The argument is that Game Mode is designed for games, which rarely use more than 8 cores, while optimizing memory latency and PCIe connectivity.
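To make the memory half of that concrete, the short sketch below (an illustrative assumption about how the split surfaces to software, using standard Windows NUMA APIs rather than anything AMD-specific) asks the OS how many memory nodes it can see and how much DRAM sits on each. In the default uniform mode the platform should report a single node; with the near/far split active, each die shows up as its own node.

```cpp
// Sketch: query how Windows exposes the Threadripper memory layout as NUMA nodes.
#include <windows.h>
#include <cstdio>

int main() {
    ULONG highestNode = 0;
    if (!GetNumaHighestNodeNumber(&highestNode)) {
        printf("Could not query NUMA information\n");
        return 1;
    }
    // One node visible = uniform ("average latency") memory; two nodes = near/far split.
    printf("%lu NUMA node(s) visible\n", highestNode + 1);

    for (ULONG node = 0; node <= highestNode; ++node) {
        ULONGLONG availableBytes = 0;
        if (GetNumaAvailableMemoryNode((UCHAR)node, &availableBytes)) {
            printf("Node %lu: %llu MB available\n", node, availableBytes / (1024 * 1024));
        }
    }
    return 0;
}
```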

A simpler way to imagine Game Mode is this: enabling it brings the top-tier Threadripper 1950X down to the level of a Ryzen 7 processor in core count, at around the same frequency, but it still gets the benefits of quad-channel memory and all 60 PCIe lanes for add-in cards. In this mode, the CPU will preferentially use the lower-latency memory first, attempting to ensure a better immediate experience. You end up with an uber-Ryzen 7 for connectivity.


AMD states that a Threadripper in Game Mode will have lower latency than a Ryzen 7, as well as a higher boost and larger boost window (up to 4 cores rather than 2)

In our testing, we ran the full gamut of CPU and CPU gaming tests, at 1080p and 4K, with Game Mode enabled.

The CPU results were perhaps to be expected: single-threaded tests with Game Mode enabled performed similarly to the Ryzen 7 and the 1950X, while multi-threaded tests were almost halved relative to the 1950X, and came in slightly slower than the Ryzen 7 1800X due to the lower all-core turbo.

The CPU gaming tests were a more mixed bag. Any performance difference from Game Mode over Creator Mode was highly dependent on the game, the graphics card, and the resolution. Overall, the results can be placed into buckets:

  • Noted minor losses in Civilization 6, Ashes of the Singularity and Shadow of Mordor
  • Minor loss to minor gain on GTX 1080 and GTX 1060 overall in all games
  • Minor gain for AMD cards on Average Frame Rates, particularly RoTR and GTA
  • Sizeable (10-25%) gain for AMD cards on 99th Percentile Frame Rates, particularly RoTR and GTA
  • Gains are more noticeable for 1080p gaming than 4K gaming
  • Most gains across the board are on 99th Percentile data

Which leads to the following conclusions:

  • No real benefit on GTX 1080 or GTX 1060, stay in Creator Mode
  • Benefits for Rise of the Tomb Raider, Rocket League and GTA
  • Benefit more at 1080p, but still gains at 4K

The trade-off of enabling Game Mode is meant to be faster, lower-latency gaming at the expense of raw compute power. The fact that it requires a reboot to switch between Creator Mode and Game Mode is a major detractor - if it were a simple in-OS switch, it could be enabled for specific titles on specific graphics cards just before a game is launched. That will likely never be possible, given how a PC decides which resources are available and when. That being said, perhaps AMD has missed a trick here.

Could AMD have Implemented Game Mode Differently?

By misinterpreting AMD's slide deck and initially testing a set of data with SMT disabled instead, we stumbled upon an interesting avenue for how users might do something akin to Game Mode, but not specifically AMD's Game Mode. This also raises the question of whether AMD implemented and labeled the Game Mode environment in the right way.

By enabling NUMA and disabling SMT, the 16C/32T chip moves down to 16C/16T. It still has 16 full cores, but it has to deal with communication across the two eight-core silicon dies. Nonetheless, it still satisfies the need for cores to access the lowest-latency memory nearest to them, as well as allowing certain games that need fewer total threads to actually work. It should, by the description alone, enable the 'legacy' part of legacy gaming.
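To illustrate the low-latency memory point, here is a minimal sketch of how a NUMA-aware engine could keep a thread's working set on its local die (an assumption about what well-behaved software might do, not something current games are known to do), using the standard Windows NUMA allocation API:

```cpp
// Sketch: allocate memory on the NUMA node local to the core running this thread.
#include <windows.h>
#include <cstdio>

int main() {
    PROCESSOR_NUMBER proc;
    GetCurrentProcessorNumberEx(&proc);

    USHORT node = 0;
    GetNumaProcessorNodeEx(&proc, &node);      // which die/node is this core on?

    // Ask for 64 MB preferentially backed by the local node's memory controller.
    SIZE_T size = 64ull * 1024 * 1024;
    void* buffer = VirtualAllocExNuma(GetCurrentProcess(), nullptr, size,
                                      MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE,
                                      node);
    if (!buffer) {
        printf("Allocation failed\n");
        return 1;
    }
    printf("Allocated %zu bytes preferring NUMA node %u\n", size, node);
    VirtualFree(buffer, 0, MEM_RELEASE);
    return 0;
}
```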

The underlying performance analysis between the two modes becomes this:

When in 16C/16T mode, performance in CPU benchmarks was higher than in 8C/16T mode.
When in 16C/16T mode, performance in CPU gaming benchmarks was higher than in 8C/16T mode. 

Some of the conclusions drawn from AMD's defined 8C/16T Game Mode for CPU gaming actually change in 16C/16T mode: games that saw slight regressions with 8 cores became neutral with 16 cores, or even showed slight improvements, particularly at 1080p.

One of the main detractors of the 8C/16T mode is that it requires a full restart in order to enable it. Disabling SMT, by contrast, could theoretically be done at the OS level before certain games come into play. If the OS is able to determine which core IDs belong to the physical cores and which are the hyperthreads, it could perhaps simply refuse to dispatch in-flight threads to the hyperthreads, allowing only one thread per core. (There is a small matter of statically shared resources to deal with as well.) The mobile world deals with thread migration between fast cores and slow cores every day, and some cores can be hotplug-disabled on the fly. One could postulate that Windows could do something similar with the equivalent of hyperthreads, as sketched below.
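As a purely hypothetical sketch of that idea (this is not an existing OS feature or AMD tool), a game launcher could approximate one-thread-per-core today by building an affinity mask that contains only the first logical processor of each physical core and applying it to itself before launching the game:

```cpp
// Sketch: restrict a process to one logical processor per physical core,
// so its threads never land on an SMT sibling.
#include <windows.h>
#include <cstdio>
#include <vector>

int main() {
    DWORD len = 0;
    GetLogicalProcessorInformationEx(RelationProcessorCore, nullptr, &len);
    std::vector<char> buffer(len);
    if (!GetLogicalProcessorInformationEx(
            RelationProcessorCore,
            reinterpret_cast<PSYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX>(buffer.data()),
            &len)) {
        printf("Topology query failed\n");
        return 1;
    }

    KAFFINITY mask = 0;
    for (char* p = buffer.data(); p < buffer.data() + len; ) {
        auto* entry = reinterpret_cast<SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX*>(p);
        if (entry->Relationship == RelationProcessorCore) {
            // Keep only the lowest-numbered logical processor of this core.
            // (Simplification: assumes a single processor group, true for a 1950X.)
            KAFFINITY coreMask = entry->Processor.GroupMask[0].Mask;
            mask |= coreMask & (~coreMask + 1);   // lowest set bit
        }
        p += entry->Size;
    }

    // Apply to the current process (e.g. a launcher) before starting the game.
    if (SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        printf("Affinity mask set to 0x%llx\n", (unsigned long long)mask);
    }
    return 0;
}
```

This sidesteps the scheduler rather than changing it, and it does nothing about the statically partitioned resources inside each core, but it shows the sort of switch that could, in principle, be flipped without a reboot.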

Would this issue need to be solved by Windows, or by AMD? I suspect a combination of both, really. 

Update:

Robert Hallock, on AMD's Threadripper webcast, has stated that the Windows scheduler is not capable of specifically zeroing out a full die to achieve the same effect. The UMA/NUMA arrangement can be managed by the Windows scheduler to assign threads to where the data is (or to assign data to where the threads are), but fully disabling a die still requires a restart.

 

Related Reading

Analyzing Creator Mode and Game Mode

104 Comments


  • Gigaplex - Thursday, August 17, 2017 - link

    How is Windows supposed to know when a specific app will run better with SMT enabled/disabled, NUMA, or even settings like SLI/Crossfire and PCIe lane distribution between peripheral cards? If your answer is app profiles based on benchmark testing, there's no way Microsoft will do that for all the near infinite configurations of hardware against all the Windows software out there. They've cut back on their own testing and fired most of their testing team. It's mostly customer beta testing instead.
  • peevee - Friday, August 18, 2017 - link

    Windows does not know whether it is a critical gaming thread or not. Setting thread affinity is not a rocket science - unless you are some Java "programmer".
  • Spoelie - Friday, August 18, 2017 - link

    And anyone not writing directly in assembly should be shot on sight, right?
  • peevee - Friday, August 18, 2017 - link

    You don't need to write in assembly to set thread affinities.
  • Glock24 - Thursday, August 17, 2017 - link

    Seems like the 1800X is a better all around CPU. If you really need and can use more than 8C/16T then get TR.

    For mixed workloads of gaming and productivity the 1800X or any of the smaller siblings is a better choice.
  • msroadkill612 - Friday, August 18, 2017 - link

    The decision watershed is pcie3 lanes imo. Otherwise, the ryzen is a mighty advance in the mainstream sweet spot over ~6 months ago.

    OTH, I see lane hungry nvme ports as a boon to expanding a pcs abilities later. The premium for an 8 core TR & mobo over ryzen seems cost justifiable expandability.
  • Luckz - Friday, August 18, 2017 - link

    It seems that the 1800X has the NVIDIA card spend less time doing weird stuff.
  • franzeal - Thursday, August 17, 2017 - link

    If you're going to reference Intel in your benchmark summaries (Rocket League is one place I noticed it), either include them or don't forget to edit them out of your copy/paste job.
  • Luckz - Friday, August 18, 2017 - link

    WCCFTech-Level writing, eh.
  • franzeal - Thursday, August 17, 2017 - link

    Again, as with the original article, the description for the Dolphin render benchmarks is incorrectly stating that the results are shown in minutes.
