Gaming Performance

Civilization 6 (DX12)

Originally penned by Sid Meier and his team, the Civ series of turn-based strategy games is a cult classic, and many an excuse for an all-nighter spent trying to get Gandhi to declare war on you due to an integer overflow. Truth be told, I never actually played the first version, but every edition from the second to the sixth, including the fourth as voiced by the late Leonard Nimoy, is a game that is easy to pick up, but hard to master.

Benchmarking Civilization has always been somewhat of an oxymoron: for a turn-based strategy game, the frame rate is not necessarily the most important thing, and in the right mood, something as low as 5 frames per second can be enough. With Civilization 6, however, Firaxis went hardcore on visual fidelity, trying to pull you into the game. As a result, Civilization can be taxing on graphics and CPUs as we crank up the details, especially in DirectX 12.

Perhaps a more poignant benchmark would be during the late game, when in older versions of Civilization it could take 20 minutes to cycle through the AI players before the human regained control. The new version of Civilization has an integrated ‘AI Benchmark’, although it is not part of our benchmark portfolio yet, due to technical reasons we are trying to solve. Instead, we run the graphics test, which provides an example of a mid-game setup at our settings.

[Graph: RX 5700 XT: Civilization 6 - Average FPS]
[Graph: RX 5700 XT: Civilization 6 - 99th Percentile]

Grand Theft Auto V

The highly anticipated iteration of the Grand Theft Auto franchise hit the shelves on April 14th 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA doesn’t provide graphical presets, but opens up the options to users and extends the boundaries by pushing even the hardest systems to the limit using Rockstar’s Advanced Game Engine under DirectX 11. Whether the user is flying high in the mountains with long draw distances or dealing with assorted trash in the city, when cranked up to maximum it creates stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark. The in-game benchmark consists of five scenarios: four short panning shots with varying lighting and weather effects, and a fifth action sequence that lasts around 90 seconds. We use only the final part of the benchmark, which combines a flight scene in a jet, an inner-city drive-by through several intersections, and a finale in which a rammed tanker explodes and sets off other cars in turn. This is a mix of distance rendering followed by a detailed near-rendering action sequence, and the title thankfully spits out frame time data.
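Since the figures below are average FPS and 99th percentile FPS derived from that frame time data, here is a minimal sketch of how the two numbers can be computed from a list of frame times. The log file name and the one-value-per-line format are assumptions for illustration, not the exact format the game exports.

# Sketch: derive average FPS and 99th percentile FPS from frame-time data.
# Assumes a plain text file with one frame time (in milliseconds) per line;
# the real export format from the game may differ.

def fps_metrics(frame_times_ms):
    """Return (average FPS, 99th percentile FPS) from frame times in ms."""
    if not frame_times_ms:
        raise ValueError("no frame time samples")
    # Average FPS = total frames / total seconds, not the mean of per-frame FPS.
    total_seconds = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_seconds
    # 99th percentile FPS: take the 99th percentile *frame time* (the slow
    # outliers) and convert it back into a frame rate.
    ordered = sorted(frame_times_ms)
    idx = min(len(ordered) - 1, int(round(0.99 * (len(ordered) - 1))))
    p99_fps = 1000.0 / ordered[idx]
    return avg_fps, p99_fps

if __name__ == "__main__":
    with open("gta5_frametimes.txt") as f:  # hypothetical log file name
        samples = [float(line) for line in f if line.strip()]
    avg, p99 = fps_metrics(samples)
    print(f"Average FPS: {avg:.1f}  99th percentile FPS: {p99:.1f}")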

[Graph: RX 5700 XT: Grand Theft Auto V - Average FPS]
[Graph: RX 5700 XT: Grand Theft Auto V - 99th Percentile]

F1 2018

Aside from keeping up to date on the Formula One world, F1 2017 added HDR support, which F1 2018 has maintained; we expect any newer versions of Codemasters' EGO engine to find their way into future F1 titles. Graphically demanding in its own right, F1 2018 keeps a useful racing-type graphics workload in our benchmarks.

We use the in-game benchmark, set to run on the Montreal track in the wet, driving as Lewis Hamilton from last place on the grid. Data is taken over a one-lap race.

[Graph: RX 5700 XT: F1 2018 - Average FPS]
[Graph: RX 5700 XT: F1 2018 - 99th Percentile]

For our discrete gaming tests, we saw very little difference between all three speed settings. This is because, at the end of the day, the resolutions that buyers of this kit are likely to play at aren't memory bound; they're almost always GPU bound. What we really need here is an APU to show some differences.

Comments

  • JoeyJoJo123 - Tuesday, January 28, 2020

    Yeah, I'd like to see AnandTech test extremely tight timings on this kit at 3600 MHz.

    However, review time is limited and expensive for the company, and I know firsthand that tweaking one timing down by 1, sitting through 2 hours of memtest86, going back into the BIOS, tweaking it down again, and so on, led to 2-3 weeks of recreational memory tuning before I dialed in the tightest possible settings for my CPU + MoBo + memory kit that didn't result in errors in memtest86 across all 4 passes.
  • JoeyJoJo123 - Tuesday, January 28, 2020

    Also, I had dual-rank RAM, and the Ryzen memory timing calculator wasn't giving me stable timings (and yes, I did follow the correct process; the utility just likely isn't optimized for fairly rare dual-rank 16GBx2 memory kits), so I didn't have a good starting point for most timings to start tweaking down further than the already sparsely populated XMP profile, which doesn't list a number for the majority of the smaller subtimings.
  • Ratman6161 - Tuesday, January 28, 2020

    I don't think dual-rank 16x2 memory kits are "fairly rare". In fact I've been running them for several years for a variety of purposes. Also, the X570 motherboards are claiming support for 128 GB of RAM, which would require 32x4 (I haven't seen any 64 GB desktop modules yet). Newegg has 15 32x4 kits. If you are springing for a 3950X you may as well go all the way and max out the RAM too. :)
  • JoeyJoJo123 - Wednesday, January 29, 2020

    I went for a 3700x. Also, take a look here:

    https://www.reddit.com/r/Amd/comments/cw7ysm/32gb_...

    But no, the Ryzen Calc (at least at the time) wasn't providing me known-good timings. I didn't even have Samsung B-Die, just Hynix CJR dual-rank memory. See the image here: https://i.imgur.com/iEb4Ctj.png
    The Ryzen calc was saying that a tCAS latency of 14 at 3600 MHz was stable. Absolutely not. Anything lower than 16 either wouldn't post, or if it did post would reset itself back to JEDEC default timings, or even more rarely, boot into memtest86 and spit out hundreds of errors. Same with the tRFC values.

    Because of the bad tCL and tRFC values (and I didn't know those specific timings were the ones causing me issues), the Ryzen calc's values weren't useful as a starting point for tuning my memory. I had to start from the 16-19-19-39 XMP values and the unfilled subtimings to try to get to where I ended up, and that takes a long-ass time of memtest86 reboots and saving BIOS profiles to quickly get back to the last working memory tuning profile, along with keeping an Excel spreadsheet on another computer marking the last working setting for each subtiming, the last non-working value, and what error I saw when I attempted to set it.

    And yes, I even did multiple passes through the dozens of individual timings to see if I could tweak previously tightened timings lower after other timings were adjusted. My last pass through (before that image) was unsuccessful in lowering any single timing by even one tick. So this is absolutely as low as it goes without going even more overkill on DRAM/SoC voltages, which were already overvolted.
  • integralfx - Tuesday, January 28, 2020

    If we assume 200 MHz per tick of tCL, tRCD, and tRP (worst-case scenario), then at 3600 MHz these sticks could do 11-19-19 at 1.50 V. The high tRCD at 5000 MHz is an IMC limitation, so it could definitely go lower at more realistic frequencies.
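As a quick sanity check of that 200 MHz-per-tick assumption, the arithmetic can be sketched as below. The 18-26-26 starting point is inferred from the comment's own numbers rather than taken from the kit's official spec sheet.

# Sketch of the scaling arithmetic in the comment above: assume each primary
# timing can drop by one tick for every 200 MHz the data rate is lowered.
# The 18-26-26 starting point is inferred from the 11-19-19 result quoted,
# not from an official specification.

RATED_SPEED = 5000            # MT/s
RATED_TIMINGS = (18, 26, 26)  # tCL, tRCD, tRP
MHZ_PER_TICK = 200            # worst-case assumption from the comment

def scaled_timings(target_speed):
    ticks = (RATED_SPEED - target_speed) // MHZ_PER_TICK
    return tuple(t - ticks for t in RATED_TIMINGS)

print(scaled_timings(3600))   # -> (11, 19, 19)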
  • ses1984 - Wednesday, January 29, 2020

    Tighter timings at a lower clock perform about the same as looser timings at a higher clock. It's barely worth benchmarking, especially when there was so little variation in all the benchmarks.
  • JlHADJOE - Thursday, January 30, 2020

    5000 MHz @ CAS 18 = 3600 MHz @ CAS 13
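That equivalence follows from converting CAS latency into absolute time: for DDR memory, true latency in nanoseconds is 2000 × CL divided by the data rate in MT/s. A minimal sketch of the arithmetic:

# True CAS latency in nanoseconds: cycles / clock frequency.
# For DDR, the I/O clock is half the data rate, so latency_ns = 2000 * CL / (MT/s).

def cas_latency_ns(data_rate_mts, cl):
    return 2000.0 * cl / data_rate_mts

print(cas_latency_ns(5000, 18))  # ~7.20 ns
print(cas_latency_ns(3600, 13))  # ~7.22 ns, effectively the same latency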
  • bug77 - Monday, January 27, 2020

    It would be so cool seeing someone cheap out on a dGPU, only to spend $1,200 on RAM to make the iGPU faster.
  • Hectandan - Monday, January 27, 2020

    What if they are assembling a sub-3L desktop? Can't fit a GPU there
  • TheinsanegamerN - Monday, January 27, 2020

    In such a small machine, you wouldn't be able to use any iGPU that could make use of 5000 MHz RAM without overheating either. What's your point?
