Civilization 6

First up in our CPU gaming tests is Civilization 6. Originally penned by Sid Meier and his team, the Civ series of turn-based strategy games is a cult classic, and many an excuse for an all-nighter trying to get Gandhi to declare war on you due to an integer overflow. Truth be told, I never actually played the first version, but I have played every edition from the second to the sixth, including the fourth as voiced by the late Leonard Nimoy. It is a game that is easy to pick up, but hard to master.
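
That Gandhi anecdote is long-standing community lore: his aggression rating supposedly sat at the bottom of an unsigned 8-bit scale, and adopting Democracy subtracted enough to wrap it around to the maximum. Whether the bug ever actually existed is disputed, but the arithmetic is easy to sketch (a hypothetical reconstruction, not the game's actual code):

    # Simulate an unsigned 8-bit value wrapping around (modulo 256).
    def u8(value: int) -> int:
        return value % 256

    aggression = 1                    # Gandhi's allegedly rock-bottom rating
    aggression = u8(aggression - 2)   # Democracy's -2 modifier underflows
    print(aggression)                 # -> 255: maximum aggression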

Benchmarking Civilization has always been somewhat of an oxymoron – for a turn-based strategy game, the frame rate is not necessarily the important thing here, and even in the right mood, something as low as 5 frames per second can be enough. With Civilization 6, however, Firaxis went hardcore on visual fidelity, trying to pull you into the game. As a result, Civilization can be taxing on graphics cards and CPUs as we crank up the details, especially in DirectX 12.

Perhaps a more pertinent benchmark would be during the late game, when in older versions of Civilization it could take 20 minutes to cycle through the AI players before the human regained control. The new version of Civilization has an integrated ‘AI Benchmark’, although it is not yet part of our benchmark portfolio, due to technical reasons we are trying to solve. Instead, we run the graphics test, which provides an example of a mid-game setup at our settings.

At both 1920x1080 and 4K resolutions, we run the same settings. Civilization 6 has sliders for MSAA, Performance Impact and Memory Impact. The latter two refer to detail and texture size respectively, and are rated from 0 (lowest) to 5 (extreme). We run our Civ6 benchmark at position four (ultra) for Performance, 0 for Memory, and with MSAA set to 2x.

For reviews where we include 8K and 16K benchmarks (Civ6 allows us to benchmark extreme resolutions on any monitor) on our GTX 1080, we run the 8K tests with the same settings as the 4K tests, but the 16K tests are set to the lowest option for Performance. The full test matrix is summarized below.
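
As a quick reference, the matrix can be written out as follows (a hypothetical Python shorthand of our own; the keys are not the game's actual configuration-file names, and the MSAA setting at 16K is assumed to match the rest):

    # Civ6 test settings per resolution. Performance and Memory Impact
    # run from 0 (lowest) to 5 (extreme); MSAA is a sample count.
    CIV6_SETTINGS = {
        "1920x1080":  {"performance_impact": 4, "memory_impact": 0, "msaa": 2},
        "3840x2160":  {"performance_impact": 4, "memory_impact": 0, "msaa": 2},
        "7680x4320":  {"performance_impact": 4, "memory_impact": 0, "msaa": 2},  # 8K: same as 4K
        "15360x8640": {"performance_impact": 0, "memory_impact": 0, "msaa": 2},  # 16K: lowest Performance
    }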

For all our results, we show the average frame rate at 1080p first. Mouse over the other graphs underneath to see 99th percentile frame rates and 'Time Under' graphs, as well as results for other resolutions. All of our benchmark results can also be found in our benchmark engine, Bench.
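
For readers unfamiliar with how these metrics relate, here is a minimal sketch of deriving all three from raw per-frame render times (a hypothetical illustration, not our actual benchmark tooling; the 30 FPS threshold for 'Time Under' is an assumed example value):

    def summarize(frame_times_ms, threshold_fps=30.0):
        """Derive average FPS, 99th percentile FPS, and 'Time Under'
        from a list of per-frame render times in milliseconds."""
        n = len(frame_times_ms)
        avg_fps = n / (sum(frame_times_ms) / 1000.0)

        # 99th percentile frame rate: the rate of the frame that 99%
        # of all frames render faster than.
        slowest = sorted(frame_times_ms)[min(int(n * 0.99), n - 1)]
        p99_fps = 1000.0 / slowest

        # 'Time Under': total seconds spent on frames slower than the
        # threshold frame rate.
        limit_ms = 1000.0 / threshold_fps
        time_under_s = sum(t for t in frame_times_ms if t > limit_ms) / 1000.0
        return avg_fps, p99_fps, time_under_s

    # Example: 1000 frames at ~16.7 ms with ten 50 ms stutters.
    print(summarize([16.7] * 990 + [50.0] * 10))

A steady frame rate shows up as a 99th percentile close to the average and a small 'Time Under' figure, which is why those two supplementary graphs can rank CPUs differently than the averages do.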

MSI GTX 1080 Gaming 8G Performance

[Graphs: Civilization 6 results at 1080p, 4K, 8K, and 16K]

ASUS GTX 1060 Strix 6GB Performance

[Graphs: Civilization 6 results at 1080p and 4K]

Sapphire R9 Fury 4GB Performance

[Graphs: Civilization 6 results at 1080p and 4K]

Sapphire RX 480 8GB Performance

[Graphs: Civilization 6 results at 1080p and 4K]

Civilization 6 Conclusion

In all of our testing scenarios, AMD wins at 1080p: minor margins on the average frame rates, but considerable gains in the time under analysis. Intel pushes ahead in almost all of the 4K results, except in the time under analysis at 4K with the R9 Fury, perhaps indicating that AMD offers a steadier frame rate, despite the lower average.

Comments

  • MTEK - Monday, July 24, 2017 - link

    Random amusement: Sandy Bridge got 1st place in the Shadow of Mordor bench w/ a GTX 1060.
  • shabby - Monday, July 24, 2017 - link

    That's funny and sad at the same time unfortunately.
  • mapesdhs - Monday, July 24, 2017 - link

    S'why I love my 5GHz 2700K (daily system). And the other one (gaming PC). And the third (benchmarking rig), the two I've sold to companies, another built for a friend, another set aside to sell, another on a shelf awaiting setup... :D 5GHz every time. M4E, TRUE, one fan, 5 mins, done.
  • GeorgeH - Monday, July 24, 2017 - link

    Those decreased overclocking performance numbers aren't just red flags, they're blinding red flashing lights with the power of a thousand suns.

    Seriously, that should have been the entire article - this platform is a disaster if it loses performance under sustained load. That's not hyperbole, it's cold hard truth. Sustained load is part of what HEDT is about, and with X299 you're spending more money for significantly less performance?

    I sincerely hope you're going to get to the bottom of this and not just shrug and let it slide away as a mystery. Hopefully it's just platform immaturity that gets ironed out, but at the present time I have absolutely no clue how you could recommend X299 in any way. Significantly less sustained performance is a do not pass go, do not collect $200, turn the car around, oh hell no, all caps showstopper.
  • deathBOB - Monday, July 24, 2017 - link

    But they're big AVX workloads. We know heat and power get a bit crazy with AVX, and at some point we should just step back and realize that overclocking may not be appropriate for these workloads.
  • GeorgeH - Monday, July 24, 2017 - link

    But other AVX workloads didn't have the issue.

    Until we know exactly what is going on and what will be required to fix it, I can't comprehend how anyone can regard X299, at least with the quad core CPUs, as anything but "Nope". Maybe a BIOS update will help, or tuning the overclock, but maybe it'll require new motherboard revisions or delidding the CPU. I'm sure it'll get fixed/understood at some point, but for now recommending this platform is really hard to accept as a good idea.
  • MrSpadge - Monday, July 24, 2017 - link

    > But other AVX workloads didn't have the issue.

    Using a few of those instructions is different from hammering the CPU with them. Not sure what this software does, but this could easily explain it.
  • Icehawk - Monday, July 24, 2017 - link

    I do a lot of Handbrake encoding to HEVC, which will peg all cores on my O/C'd 3770. It uses AVX, but obviously a much older version with less functionality, and I can have it going indefinitely without issue.

    I've looked at the 7800X/7820X as an upgrade, but if they cannot sustain performance with a reasonable cooling setup then there is no point. The KBL-X parts don't offer enough of a performance improvement to be worth the cost of the X299 mobos, which also seem to be having teething problems.

    Future proofing is laughable, let's say you bought a 7740x today with the thought of upgrading in two years to a higher core count proc - how likely is it that your motherboard and the new proc will have the same pinout? History says it ain't happening at Camp Intel.

    At this point I'm giving a hard pass to this generation of Intel products and hope that v2 will fix these issues. By then AMD may have come close enough in ST performance that I would consider them again. I really want the best ST & MT performance I can get in the $350 CPU zone, which has traditionally been the top i7. AMD's MT performance almost tempts me to just build an encoding box.

    I loved my Athlon back in the day, anyone remember Golden Fingers? :D
  • mapesdhs - Monday, July 24, 2017 - link

    Golden Fingers... I had to look that up, blimey! :D
  • DrKlahn - Tuesday, July 25, 2017 - link

    I recently went from a 4.6GHz 3770K to a 1700X @ 4GHz at home. I play some older games that don't thread well (WoW being one of them). The Ryzen is at least as fast or faster in those workloads. Run Handbrake or Sony Movie Studio and the Ryzen is MUCH faster. We built 6-core 5820K stations at work for some users and have recently added Ryzen 1600 stations due to the tremendous cost savings. We have yet to run into any tangible difference between the two platforms.

    Intel does have a lead in ST, but tests like these emphasize it to the point it seems like a bigger advantage than it is in reality. The only time I could see the premium worth it is if you have a task that needs ST the majority of the time (or a program is simply very poorly optimized for Ryzen). Otherwise AMD is offering an extraordinary value and as you point out AM4 will at least be supported for 2 more spins.
