Creator Mode and Game Mode

*This page was updated on 8/17. A subsequent article with new information has been posted.

Due to the difference in memory latency between the two pairs of memory channels, AMD is implementing a ‘mode’ strategy for users to select depending on their workflow. The two modes are called Creator Mode (the default) and Game Mode, and each controls two switches that adjust how the system behaves.

The two switches are:

  • Legacy Compatibility Mode, on or off (off by default)
  • Memory Mode: UMA vs NUMA (UMA by default)

The first switch disables the cores in one of the silicon dies, but retains access to that die's DRAM channels and PCIe lanes. When the LCM switch is off, each core can handle two threads and the 16-core chip has a total of 32 threads. When enabled, the system cuts half the cores, leaving 8 cores and 16 threads. This switch is primarily for compatibility purposes, as certain games (like DiRT) cannot work with more than 20 threads in a system; by reducing the total number of threads, these programs will be able to run. Turning the cores in one die off also alleviates some potential pressure on the core microarchitecture for cross-die communication.
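As a rough illustration of why the raw thread count matters, the short C++ sketch below simply reports how many logical processors the operating system exposes; a 1950X shows 32 with both dies active and 16 with Legacy Compatibility Mode enabled. The 20-thread figure is taken from the DiRT example above and is used purely as an illustrative constant, not a documented limit in any particular game engine.

    #include <iostream>
    #include <thread>

    int main() {
        // Logical processors visible to the OS: 32 on a 1950X with both dies
        // active, 16 once Legacy Compatibility Mode disables one die.
        // (hardware_concurrency() may return 0 if the value is unknown.)
        unsigned int threads = std::thread::hardware_concurrency();
        std::cout << "Logical processors: " << threads << "\n";

        // Illustrative only: the 20-thread ceiling cited for DiRT above.
        const unsigned int kAssumedLimit = 20;
        std::cout << (threads > kAssumedLimit ? "Over" : "Within")
                  << " the assumed 20-thread limit\n";
        return 0;
    }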

The second switch, Memory Mode, puts the system into either a uniform memory access (UMA) or a non-uniform memory access (NUMA) configuration. Under the default UMA setting, all of the memory is presented to the system as one large block, with maximum bandwidth and an average latency between the two channel pairs. This makes it simple for software to reason about, although the actual latency for a single access can be a good 20% faster or slower than the average, depending on which memory bank services it.

NUMA still gives the system access to all of the memory, but splits the memory and cores into two NUMA nodes depending on which pair of memory channels is nearest the core that needs the data. The system will keep the data for a core as near to it as possible, giving the lowest latency. For a single core, that means it will fill up the memory nearest to it first, at half the total bandwidth but low latency, then spill into the other half of the memory at the same half bandwidth but higher latency. This mode is designed for latency-sensitive workloads where the lower latency removes a bottleneck in the workflow. For some code this matters, as well as some games – low latency can improve averages or 99th percentiles in game benchmarks.
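To make the two policies a little more concrete, here is a minimal sketch of how the same distinction looks to software on a Linux box with libnuma (an assumption made purely for illustration). numa_alloc_interleaved() spreads pages across both memory banks, which is roughly the behaviour the Distributed/UMA setting imposes on everything, while numa_alloc_local() keeps pages on the node nearest the calling thread, which is the placement the Local/NUMA setting lets the OS favour.

    // Minimal sketch, assuming Linux with libnuma installed (illustrative only).
    // Build with: g++ memory_modes.cpp -lnuma
    #include <cstddef>
    #include <cstdio>
    #include <numa.h>

    int main() {
        if (numa_available() < 0) {
            std::fprintf(stderr, "No NUMA topology exposed by the OS/BIOS\n");
            return 1;
        }
        const size_t size = 256UL * 1024 * 1024;   // 256 MB test buffer

        // Distributed/UMA-like placement: pages interleaved across all nodes,
        // trading some latency for aggregate bandwidth.
        void* interleaved = numa_alloc_interleaved(size);

        // Local/NUMA-like placement: pages kept on the node this thread runs on,
        // minimising latency for this thread.
        void* local = numa_alloc_local(size);

        std::printf("Allocated one interleaved and one node-local buffer\n");

        numa_free(interleaved, size);
        numa_free(local, size);
        return 0;
    }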

The confusing thing about this switch is that AMD is calling it ‘Memory Access Mode’ in their documents, and labeling the two options as Local and Distributed. This naming is arguably easier to follow than the Legacy Compatibility Mode switch: the Local setting focuses on the latency local to the core (NUMA), and the Distributed setting focuses on the bandwidth available to the core (UMA), with Distributed being the default. The difference is also directly visible to software, as the short check after the list below shows.

  • When Memory Access Mode is Local, NUMA is enabled (Latency)
  • When Memory Access Mode is Distributed, UMA is enabled (Bandwidth, default)
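One way to see which setting a board is actually running in, assuming a Linux system with libnuma for the sake of the sketch, is simply to count the memory nodes the firmware reports: Distributed/UMA presents the whole chip as a single node, while Local/NUMA presents two.

    // Minimal sketch, assuming Linux with libnuma installed.
    // Build with: g++ check_mode.cpp -lnuma
    #include <cstdio>
    #include <numa.h>

    int main() {
        if (numa_available() < 0) {
            std::printf("Kernel reports no NUMA topology: Distributed (UMA) mode\n");
            return 0;
        }
        int nodes = numa_num_configured_nodes();
        std::printf("%d node(s) reported: %s\n", nodes,
                    nodes > 1 ? "Local (NUMA) mode" : "Distributed (UMA) mode");
        return 0;
    }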

So with that in mind, there are four ways to arrange these two switches. AMD has given two of these configurations specific names to help users depending on how they use their system: Creator Mode is designed to give as many threads and as much memory bandwidth as possible, while Game Mode is designed to optimize for latency and compatibility in order to drive game frame rates.

AMD Threadripper Options
                        Words That Make Sense          Marketing Spiel
Ryzen Master Profile    Dies    Memory Mode            Legacy Compatibility Mode    Memory Access Mode
Creator Mode            Two     UMA                    Off                          Distributed
-                       Two     NUMA                   Off                          Local
-                       One     UMA                    On                           Distributed
Game Mode               One     NUMA                   On                           Local

There are two ways to select these modes, although this is also a somewhat confusing part of the situation.

The way I would normally adjust these settings is through the BIOS; however, the BIOS settings do not explicitly state ‘Creator Mode’ or ‘Game Mode’. They should give immediate access to the Memory Mode setting, where ASUS has used the Memory Access naming of Local and Distributed rather than NUMA and UMA. For Legacy Compatibility Mode, users will have to dive several screens down into the Zen options and manually switch off eight of the cores, assuming the setting is exposed to the user at all. This makes Ryzen Master the easiest way to implement Game Mode.

While we were testing Threadripper, AMD updated Ryzen Master several times to account for the latest updates, so chances are that by the time you are reading this, things might have changed again. But the crux is that Creator Mode and Game Mode are not separate settings here either. Instead, AMD is labelling these as ‘profiles’. Users can select the Creator Mode profile or the Game Mode profile, and within those profiles, the two switches mentioned above (labelled as Legacy Compatibility Mode and Memory Access Mode) will be switched as required.

Cache Performance

As an academic exercise, Creator Mode and Game Mode make sense depending on the workflow. If you don’t need the threads and want the lower latency, Game Mode is for you. The perhaps odd thing is that Threadripper is aimed at highly threaded workloads more than gaming, so losing half the threads in Game Mode might actually be a detriment to a workstation implementation. That being said, users can leave both dies fully enabled and still change the memory access mode on its own, although AMD is really focusing on the Creator and Game Mode profiles specifically.

For this review, we tested both Creator (default) and Game modes on the 16-core Threadripper 1950X. As part of this we looked into memory latency in both modes, as well as at higher DRAM frequencies. These latency numbers take the results for the core selected (we chose core 2 in each case) and stride through increasing block sizes to hit the L1, L2 and L3 caches and then main memory. For UMA arrangements, as in Creator Mode, the main memory result will be an average between the near and far memory results. We’ve also added in a Ryzen 5 1600X as an example of a single Zeppelin die, and a Core i7-6950X (Broadwell-E) for comparison. All CPUs were run at DDR4-2400, which is the maximum supported with two DIMMs per channel.
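For readers who want to replicate the idea, the sketch below shows the general shape of a pointer-chasing latency test (a simplified stand-in, not the actual tool used to generate these numbers): build a dependent chain of loads through a buffer, then time the average cost per hop. With a buffer under 8MB the chain stays in the L3; above that it spills out to DRAM and the main memory latencies discussed below dominate.

    // Minimal pointer-chase latency sketch (not the benchmark used in the review).
    // Build with: g++ -O2 -std=c++14 chase.cpp
    #include <algorithm>
    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <numeric>
    #include <random>
    #include <vector>

    int main() {
        const size_t bytes = 64UL * 1024 * 1024;   // 64 MB: well past the 8 MB L3 slice
        const size_t n = bytes / sizeof(size_t);

        // Build a single random cycle so every load depends on the previous one.
        std::vector<size_t> order(n);
        std::iota(order.begin(), order.end(), 0);
        std::shuffle(order.begin(), order.end(), std::mt19937_64(42));

        std::vector<size_t> next(n);
        for (size_t i = 0; i + 1 < n; ++i) next[order[i]] = order[i + 1];
        next[order[n - 1]] = order[0];

        // Chase the chain and report average nanoseconds per dependent load.
        const size_t hops = 20'000'000;
        size_t idx = 0;
        auto t0 = std::chrono::steady_clock::now();
        for (size_t i = 0; i < hops; ++i) idx = next[idx];
        auto t1 = std::chrono::steady_clock::now();

        double ns = std::chrono::duration<double, std::nano>(t1 - t0).count() / hops;
        std::printf("~%.1f ns per load (idx=%zu)\n", ns, idx);  // print idx so the loop isn't optimised away
        return 0;
    }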

For the 1950X in the two modes, the results are essentially equal until we hit 8MB, which is the L3 cache limit per CCX. After this, the core bounces out to main memory, where Game Mode sits around 79 ns while Creator Mode is at 108 ns. By comparison, the Ryzen 5 1600X has a lower latency at 8MB (20 ns vs 41 ns), and then sits between the Creator and Game Mode results at 87 ns. It would appear that the bigger downside of Creator Mode here is that main memory accesses are much slower than on a regular Ryzen chip or in Game Mode.

If we crank up the DRAM frequency to DDR4-3200 for the Threadripper 1950X, the numbers change a fair bit:



Up until the 8MB boundary where the L3 gives way to main memory, everything is pretty much equal. At 8MB, however, the latency at DDR4-2400 is 41 ns compared to 18 ns at DDR4-3200. Moving out into full main memory shows a pattern: Creator Mode at DDR4-3200 is close to Game Mode at DDR4-2400 (87 ns vs 79 ns), but taking Game Mode to DDR4-3200 drops the latency down to 65 ns.

Another element we tested while in Game Mode was the latency for near memory and far memory as seen from a single core. Remember this slide from AMD’s deck?

In our testing, we achieved the following:

  • At DDR4-2400, 79 ns near memory and 136 ns far memory (108 ns average)
  • At DDR4-3200, 65 ns near memory and 108 ns far memory (87 ns average)

Those average numbers, (79 + 136)/2 ≈ 108 ns and (65 + 108)/2 ≈ 87 ns, are exactly what we measure for Creator Mode by default, indicating that the UMA arrangement in Creator Mode simply distributes accesses across both memory banks more or less evenly.
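For anyone wanting to reproduce the near/far split themselves, the rough recipe, sketched below under the assumption of a Linux system with libnuma, is to pin the measuring thread to one node and allocate test buffers on each node in turn; the buffers would then be walked with a pointer-chase loop like the one earlier.

    // Rough outline only, assuming Linux with libnuma; pair with a
    // pointer-chase walk like the earlier sketch to get actual latencies.
    // Build with: g++ near_far.cpp -lnuma
    #include <cstddef>
    #include <cstdio>
    #include <numa.h>

    int main() {
        if (numa_available() < 0 || numa_num_configured_nodes() < 2) {
            std::fprintf(stderr, "Needs Local (NUMA) mode with two visible nodes\n");
            return 1;
        }
        const size_t size = 256UL * 1024 * 1024;

        numa_run_on_node(0);                          // keep this thread on node 0's cores
        void* near_buf = numa_alloc_onnode(size, 0);  // memory behind node 0: the 'near' case
        void* far_buf  = numa_alloc_onnode(size, 1);  // memory behind node 1: the 'far' case

        // ... time a dependent-load walk through near_buf, then far_buf ...
        std::printf("Buffers ready: near=%p far=%p\n", near_buf, far_buf);

        numa_free(near_buf, size);
        numa_free(far_buf, size);
        return 0;
    }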


Comments

  • drajitshnew - Thursday, August 10, 2017 - link

    You have written that "This socket is identical (but not interchangeable) to the SP3 socket used for EPYC,".
    Please clarify.
    I was under the impression that if you drop an EPYC in a Threadripper board, it would disable 4 memory channels & 64 PCIe lanes, as those will simply not be wired up.
  • Deshi! - Friday, August 11, 2017 - link

    No, AMD has stated that won't work. It's probably not a hardware incompatibility, but they probably put microcode on the CPUs so that if it doesn't detect it's a Ryzen CPU it doesn't work. There might also be differences in how the dies are wired up on the fabric, since it's 2 dies instead of 4. Remember, Threadripper has only 2 physical dies that are active; on EPYC all processors are 4 dies with cores on each die disabled right down to the 8-core part (2 enabled on each physical die).
  • Deshi! - Friday, August 11, 2017 - link

    Wish there was an edit function... but to add to that, if you pop in an EPYC processor, it might go looking for those extra lanes and memory buses that don't exist on Threadripper boards, hence causing it not to function.
  • pinellaspete - Thursday, August 10, 2017 - link

    This is the second article where you've tried to start an acronym called SHED (Super High End Desktop) in referring to AMD Threadripper systems. You also say that Intel systems are HEDT (High End Desktop) when in all reality both AMD and Intel are HEDT. It is just that Intel has been keeping the core count low on consumer systems for so long you think that anything over a 10 core system is unusual.

    AMD is actually producing a HEDT CPU for $1000 and not inflating the price of a HEDT CPU and bleeding their customers like Intel was doing with the i7-6950X CPU for $1750. HEDT CPUs should cost about $1000 and performance should increase with every generation for the same price, not relentlessly jacking the price as Intel has done.

    HEDT should be increasing in performance every generation and you prove yourself to be Intel biased when something finally comes along that beats Intel's butt. Just because it beats Intel you want to put it into a different category so it doesn't look like Intel fares as bad. If we start a new category of computers called SHED what comes next in a few years? SDHED? Super Duper High End Desktop?
  • Deshi! - Friday, August 11, 2017 - link

    There's a good reason for that. Intel is not just inflating the cost because they want to. It literally costs them much more to produce their chips because of the monolithic die approach vs AMD's modular approach. AMD's yields are much better than Intel's at the higher core counts. Intel will not be able to match AMD's prices and still make a significant profit unless they also adopt the same approach.
  • fanofanand - Tuesday, August 15, 2017 - link

    "HEDT CPUs should cost about $1000 "

    That's not how free markets work. Companies will price any given product at their maximum profit. If they can sell 10 @ $2000 or 100 at $1000 and it costs them $500 to produce, they would make $15,000 selling 10 and $50,000 selling 100 of them. Intel isn't filled with idiots, they priced their chips at whatever they thought would bring the maximum profits. The best way for the consumer to protest prices that we believe are higher than the "right" price is to not buy them. The companies will be forced to reduce their prices to find the market equilibrium. Stop complaining about Intel's gouging, vote with your wallet and buy AMD. Or don't, it's up to you.
  • Stiggy930 - Thursday, August 10, 2017 - link

    Honestly, the review is somewhat disappointing. For a pro-sumer product, there is no MySQL/PostgreSQL benchmark. No compilation test under Linux environment. Really?
  • name99 - Friday, August 11, 2017 - link

    "In an ideal world, all software would be NUMA-aware, eliminating any concerns over the matter."

    Why? This is an idiotic statement, like saying that in an ideal world all software would be aware of cache topology. In an actual ideal world, the OS would handle page or task migration between NUMA nodes transparently enough that almost no app would even notice NUMA, and even in a non-ideal world, how much does it actually matter?
    Given the way the tech world tends to work ("OMG, by using DRAM that's overclocked by 300MHz you can increase your Cinebench score by .5% !!! This is the most important fact in the history of the universe!!!") my suspicion, until proven otherwise, is that the amount of software for which this actually matters is pretty much negligible and it's not worth worrying about.
  • cheshirster - Friday, August 11, 2017 - link

    AnandTech's power and compiling tests are completely out of line with other reviewers' results.
    Still hiding poor Skylake-X gaming results.
    Most of the tests are completely outside that 16-core CPU's target workloads.
    2400 memory used for tests.
    Absolutely zero perf/watt and price/perf analysis.

    Intel bias is through the roof here.
    Looks like I'm done with AnandTech.
  • Hurr Durr - Friday, August 11, 2017 - link

    Here's your pity comment.
