Scaled Gaming Performance: High Resolution

Civilization 6

Originally penned by Sid Meier and his team, the Civilization series of turn-based strategy games is a cult classic, and the excuse for many an all-nighter spent trying to get Gandhi to declare war on you due to an integer underflow. Truth be told, I never actually played the first version, but I have played every edition from the second to the sixth, including the fourth as voiced by the late Leonard Nimoy, and it is a game that is easy to pick up but hard to master.

Benchmarking Civilization has always been somewhat of a paradox – for a turn-based strategy game, the frame rate is not necessarily the most important thing, and in the right mood, something as low as 5 frames per second can be enough. With Civilization 6, however, Firaxis went hardcore on visual fidelity, trying to pull you into the game. As a result, Civilization can be taxing on both graphics and CPUs as we crank up the details, especially in DirectX 12.

[Chart: GTX 1080: Civilization VI, Average FPS]
[Chart: GTX 1080: Civilization VI, 95th Percentile]
Blue is XMP; Orange is JEDEC at Low CL

Performance in Civ VI shows there is very little benefit to be had by going from DDR5-4800 to DDR5-6400. The results also show that Civ VI actually benefits from lower latencies, with DDR5-4800 at CL32 outperforming all of the frequencies tested at CL36.
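
To see why a lower CAS latency can offset a higher frequency, it helps to convert CL (counted in memory clock cycles) into nanoseconds. Below is a minimal back-of-the-envelope sketch in Python; the kit timings are illustrative rather than our exact tested sub-timings:

    # First-word CAS latency in nanoseconds. DDR memory transfers twice
    # per clock, so the clock period in ns is 2000 / (rate in MT/s),
    # and CL is counted in those clock cycles.
    def cas_latency_ns(transfer_rate_mts: float, cl: int) -> float:
        return cl * 2000.0 / transfer_rate_mts

    for rate, cl in [(4800, 32), (6000, 36), (6400, 36)]:
        print(f"DDR5-{rate} CL{cl}: {cas_latency_ns(rate, cl):.2f} ns")
    # DDR5-4800 CL32: 13.33 ns
    # DDR5-6000 CL36: 12.00 ns
    # DDR5-6400 CL36: 11.25 ns

Note that by this first-word measure, DDR5-6400 CL36 is nominally the quickest; CAS latency is only one timing among many, so a result like Civ VI favoring DDR5-4800 CL32 reflects the full timing set rather than CL alone.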

Shadow of the Tomb Raider (DX12)

The latest installment of the Tomb Raider franchise does less rising and lurks more in the shadows with Shadow of the Tomb Raider. As expected, this action-adventure follows Lara Croft, the franchise's main protagonist, as she muscles through the Mesoamerican and South American regions looking to stop a Mayan apocalypse she herself unleashed. Shadow of the Tomb Raider is the direct sequel to Rise of the Tomb Raider; it was developed by Eidos Montreal and Crystal Dynamics, published by Square Enix, and hit shelves across multiple platforms in September 2018. This title effectively closes the Lara Croft origins story and received critical acclaim upon its release.

The integrated Shadow of the Tomb Raider benchmark is similar to that of its predecessor, Rise of the Tomb Raider, which we used in our previous benchmark suite. The newer Shadow of the Tomb Raider supports both DirectX 11 and 12, with this particular title touted as having one of the best implementations of DirectX 12 of any game released so far.

[Chart: GTX 1080: Shadow of the Tomb Raider, Average FPS]
[Chart: GTX 1080: Shadow of the Tomb Raider, 95th Percentile]
Blue is XMP; Orange is JEDEC at Low CL

Looking at our results in Shadow of the Tomb Raider, we did see some improvement in performance scaling from DDR5-4800 to DDR5-6400. The biggest improvement came when testing DDR5-4800 CL32, which performed similarly to DDR5-6000 CL36.
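
For context on what each frequency step is worth in raw throughput, here is a rough sketch of theoretical peak bandwidth, assuming a standard dual-channel configuration moving 8 bytes per channel per transfer (real-world throughput will be lower):

    # Theoretical peak bandwidth: each 64-bit channel moves 8 bytes per
    # transfer, and MT/s * bytes per transfer gives MB/s.
    def peak_bandwidth_gbs(transfer_rate_mts: float, channels: int = 2) -> float:
        return transfer_rate_mts * 8 * channels / 1000.0

    for rate in (4800, 5200, 5600, 6000, 6400):
        print(f"DDR5-{rate}: {peak_bandwidth_gbs(rate):.1f} GB/s")
    # DDR5-4800: 76.8 GB/s, rising to DDR5-6400: 102.4 GB/s

How much of that extra third of peak bandwidth a game actually sees depends on how memory-bound its workload is, which is why the gains here are visible but modest.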

Strange Brigade (DX12)

Strange Brigade is set in 1903 Egypt and follows a story very similar to that of the Mummy film franchise. This particular third-person shooter was developed by Rebellion Developments, which is more widely known for games such as the Sniper Elite and Alien vs Predator series. The game follows the hunt for Seteki the Witch Queen, who has arisen once again, and the only 'troop' who can ultimately stop her. Gameplay is cooperative-centric, with a wide variety of levels and many puzzles that need solving by the British colonial Secret Service agents sent to put an end to her reign of barbarism and brutality.

The game supports both the DirectX 12 and Vulkan APIs and houses its own built-in benchmark, which offers various customization options including textures, anti-aliasing, reflections, and draw distance, and even allows users to enable or disable motion blur, ambient occlusion, and tessellation, among others. AMD has previously boasted about Strange Brigade's Vulkan API implementation, which offers scalability for AMD multi-graphics card configurations. For our testing, we use the DirectX 12 benchmark.

[Chart: GTX 1080: Strange Brigade DX12, Average FPS]
[Chart: GTX 1080: Strange Brigade DX12, 95th Percentile]
Blue is XMP; Orange is JEDEC at Low CL

Performance in Strange Brigade wasn't influenced by the frequency or latency of the G.Skill Trident Z5 DDR5 memory in terms of average frame rates. We do note, however, that 95th percentile performance does, for the most part, improve as we increase the memory frequency.
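
As a refresher on what the 95th percentile figures represent, here is a minimal sketch of one common way to derive average and 95th percentile FPS from per-frame render times. The frame times below are made up for illustration, and this is not our exact aggregation script:

    # Average FPS and 95th-percentile FPS from per-frame times (ms).
    # The percentile figure converts the frame time at the 95% mark of
    # the sorted distribution into FPS, capturing the slowest stretches.
    def summarize(frame_times_ms):
        avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
        p95_time = sorted(frame_times_ms)[int(0.95 * len(frame_times_ms))]
        return avg_fps, 1000.0 / p95_time

    frames = [8.3, 8.5, 8.1, 9.0, 8.4, 12.5, 8.2, 8.6, 8.3, 15.0]
    avg, p95 = summarize(frames)
    print(f"Average: {avg:.1f} FPS, 95th percentile: {p95:.1f} FPS")

A run with a high average can still post a much lower 95th percentile figure if a handful of frames take two or three times as long as the rest, which is why the two charts sometimes diverge.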

Comments

  • bananaforscale - Thursday, December 30, 2021 - link

    Command rate of 14? Shirley you mean latency of 14.
  • Oxford Guy - Saturday, January 1, 2022 - link

    Yes. Posting with the flu leads to mistakes.
  • Kvaern1 - Sunday, January 9, 2022 - link

    I wouldn't call DDR4-3200/3600 expensive high-end RAM. 3200 wasn't even expensive when I put it in my old Skylake in early 2016. DDR5, OTOH, currently costs about twice as much as DDR4 for basically no gains outside of massively parallel computing realms.
  • Oxford Guy - Wednesday, January 12, 2022 - link

    It depends upon whether it's top-grade B-die or not. 3600 CL14 can be quite pricey.
  • throAU - Thursday, December 30, 2021 - link

    uh.... why benchmark game performance at 4K with a GTX 1080 from... 2016?

    Surely to benchmark CPU vs. memory performance with DDR5 you'd want a relevant GPU to pair with it? One that isn't being strangled at 4K, etc.
  • throAU - Thursday, December 30, 2021 - link

    Saw an earlier comment from Ian - No GPUs available.

    Well then, I guess don't run GPU-limited benchmarks as part of the memory scaling analysis. If you can't run the numbers legitimately, then don't run them. I have to say this is just not up to the usual AnandTech high standard we've come to expect over the past two decades.
  • Oxford Guy - Sunday, January 2, 2022 - link

    AMD and Nvidia are selling plenty of video cards for high prices that have less performance than a 1080. Cards with less performance than a 1080 and cards with equivalent performance continue to be brought to market.

    People have been conditioned to assume the only valid tests involve the most expensive cutting-edge consumer hardware because that hardware is typically provided by companies to try to get sales. I remember when this place had chart data points featuring triple-SLI setups with the top Nvidia cards of the time. Who could afford that, and who could tolerate the noise and heat? Other sites have routinely done GPU tests with really expensive Intel CPUs. They say it's to eliminate bottlenecks, but when the only data involves luxury hardware, it can be less informative to the majority of buyers who aren't going to spend that kind of money. The same goes for using a video card like a Titan instead of one at a reasonable price point. Not only were cards in that line overpriced, they had a short market lifespan as I recall. Buying one was more about bragging rights than value for one's money.

    If the video card being used were a GTX 580 or something else that's totally irrelevant to contemporary gaming, then you could legitimately call the data illegitimate. What it actually is, is legitimate data that's not as complete as you'd like. I would particularly like to see DDR4 performance in the charts of any article about DDR5 performance. Not having that doesn't make the data invalid, just less convenient.

    Many would be happy to have 1080-level performance given the current situation. Extremetech was actually recommending that people look at the ancient 290X due to the GPU pricing situation.

    Vega cards, which weren't so impressive (especially in performance-per-watt, but also in performance-per-decibel) when they were new, are now in their second round of mining-induced gouging. The pricing for those used is preposterous, and yet that's the situation we're in.

    Unless you have quite a bit of disposable income for gaming, you're unlikely to have a 3080/3090 now or in the near future. It may be that testing with a 3090 would be more irrelevant than with a 1080, simply because so small a percentage of buyers will own a card with that level of performance now or in the near future.
  • TheinsanegamerN - Monday, January 3, 2022 - link

    "People have been conditioned to assume the only valid tests involve the most expensive cutting-edge consumer hardware"

    People are smart enough to use their brains and realize that to test the scaling of a component, you must remove every possible bottleneck not related to said component. Running a GPU-limited test on a memory scaling benchmark is utterly pointless.

    "AMD and Nvidia are selling plenty of video cards for high prices that have less performance than a 1080"

    Intel sells lots of CPUs that are slower than a 12900K. Should they have used a Pentium G6400 for these tests? How about 2133 MHz DDR4?

    See, sales numbers are not relevant to scaling tests. We are concerned with how well a certain part scales, not how well it sells. Why is this so hard for people to understand?
  • Oxford Guy - Tuesday, January 4, 2022 - link

    'See, sales numbers are not relevant to scaling tests.'

    If the tests don't match the product usage, the tests aren't relevant.
  • Tom Sunday - Tuesday, January 4, 2022 - link

    All said and done, I am still running DDR3 memory in my used and cobbled-together DELL XPS 730X from 2008. If I had the money and a meaningful job, I would most certainly buy DDR5 and an Alder Lake setup and be happy.
