Gaming: Civilization 6 (DX12)

Originally penned by Sid Meier and his team, the Civ series of turn-based strategy games is a cult classic, and many an excuse for an all-nighter trying to get Gandhi to declare war on you due to an integer overflow. Truth be told I never actually played the first version, but every edition from the second to the sixth, including the fourth as voiced by the late Leonard Nimoy, is a game that is easy to pick up, but hard to master.
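
For anyone who missed the legend, the popular explanation is that Gandhi's aggression score lived in an unsigned 8-bit value and a late-game modifier pushed it below zero, wrapping it around to maximum hostility. The sketch below is purely illustrative of that wrap-around behaviour in C; the variable name and values are assumptions, not Firaxis's actual code:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Illustrative only: an aggression score held in an unsigned 8-bit
       integer. Subtracting past zero wraps modulo 256, so a "peaceful"
       value of 1 becomes a maximally hostile 255. */
    uint8_t aggression = 1;  /* assumed starting value */
    aggression -= 2;         /* unsigned wrap-around: 1 - 2 -> 255 */
    printf("aggression = %u\n", aggression);
    return 0;
}
```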

Benchmarking Civilization has always been somewhat of an oxymoron – for a turn-based strategy game, the frame rate is not necessarily the most important thing, and in the right mood something as low as 5 frames per second can be enough. With Civilization 6 however, Firaxis went hardcore on visual fidelity, trying to pull you into the game. As a result, Civilization can be taxing on graphics and CPUs as we crank up the details, especially in DirectX 12.

Perhaps a more poignant benchmark would be during the late game, when in the older versions of Civilization it could take 20 minutes to cycle around the AI players before the human regained control. The new version of Civilization has an integrated ‘AI Benchmark’, although it is not part of our benchmark portfolio yet, due to technical reasons we are still trying to solve. Instead, we run the graphics test, which provides an example of a mid-game setup at our settings.

AnandTech CPU Gaming 2019 Game List

Game              Genre   Release Date   API    IGP           Low        Med        High
Civilization VI   RTS     Oct 2016       DX12   1080p Ultra   4K Ultra   8K Ultra   16K Low

All of our benchmark results can also be found in our benchmark engine, Bench.

[Charts: Civilization 6 Average FPS and 95th Percentile results at IGP, Low, Medium, and High settings]
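
Our exact post-processing isn't shown on this page, but for readers wondering how the two numbers in these charts relate to a frame-time log, here is a minimal sketch of one common way to derive them; the sample values, names, and the percentile convention (95th-percentile frame time reported as an equivalent frame rate) are assumptions for illustration:

```c
#include <stdio.h>
#include <stdlib.h>

/* qsort comparator for doubles, ascending */
static int cmp_double(const void *a, const void *b) {
    double da = *(const double *)a, db = *(const double *)b;
    return (da > db) - (da < db);
}

int main(void) {
    /* Illustrative frame times in milliseconds; a real run would log
       thousands of samples from the benchmark. */
    double frame_ms[] = { 8.1, 8.3, 8.0, 9.2, 8.4, 15.7, 8.2, 8.5, 8.3, 12.9 };
    size_t n = sizeof(frame_ms) / sizeof(frame_ms[0]);

    double total_ms = 0.0;
    for (size_t i = 0; i < n; i++)
        total_ms += frame_ms[i];
    double avg_fps = 1000.0 * n / total_ms;   /* average frame rate */

    /* 95th-percentile frame time, reported as the equivalent frame rate:
       sort the frame times and take the sample 95% of the way up. */
    qsort(frame_ms, n, sizeof(double), cmp_double);
    size_t idx = (size_t)(0.95 * (n - 1));
    double p95_fps = 1000.0 / frame_ms[idx];

    printf("Average FPS: %.1f\n", avg_fps);
    printf("95th percentile FPS: %.1f\n", p95_fps);
    return 0;
}
```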

Continuing the theme we’ve seen thus far, Civilization 6 is another game where the 9900K does provide some benefits, but not under all circumstances. The game is not particularly GPU-intensive to begin with, so at just 4K Ultra we’re still not entirely GPU limited; but past a Ryzen 7 2700X or so, all the CPUs start running together. We have to drop to 1080p Ultra to really pull the CPUs off of the dogpile, at which point the 9900K comes out in the lead.

This is another game that doesn’t seem to care about core counts so much as it does frequencies. So the 9900K has the strongest position here, while the 9700K takes second place. But neither is very far ahead of the 8700K, with Intel’s latest coming in at just 12% faster than their former flagship, even at these CPU-benchmark-friendly settings.

Curiously, we also see the 9900K fall behind the 9700K at 4K and higher. The difference is easily small enough to be noise, but it might be a very slight effect of the lower-tier chips not having to share their cores with hyper-threading.

274 Comments

  • 3dGfx - Friday, October 19, 2018 - link

    game developers like to build and test on the same machine
  • mr_tawan - Saturday, October 20, 2018 - link

    > game developers like to build and test on the same machine

    Oh I thought they use remote debugging.
  • 12345 - Wednesday, March 27, 2019 - link

    Only thing I can think of as a gaming use for those would be to pass through a gpu each to several VMs.
  • close - Saturday, October 20, 2018 - link

    @Ryan, "There’s no way around it, in almost every scenario it was either top or within variance of being the best processor in every test (except Ashes at 4K). Intel has built the world’s best gaming processor (again)."

    Am I reading the iGPU page wrong? The occasional 100+% handicap does not seem to be "within variance".
  • daxpax - Saturday, October 20, 2018 - link

    if you noticed, the 2700X is faster in half the benchmarks for games, but they didn't include it
  • nathanddrews - Friday, October 19, 2018 - link

    That wasn't a negative critique of the review, just the opposite in fact: from the selection of benchmarks you provided, it is EASY to see that given more GPU power, the new Intel chips will clearly outperform AMD most of the time - generally with average, but specifically minimum frames. From where I'm sitting - 3570K+1080Ti - I think I could save a lot of money by getting a 2600X/2700X OC setup and not miss out on too many fpses.
  • philehidiot - Friday, October 19, 2018 - link

    I think anyone with any sense (and the constraints of a budget / missus) would be stupid to buy this CPU for gaming. The sensible thing to do is to buy the AMD chip that provides 99% of the gaming performance for half the price (even better value when you factor in the mobo) and then to plough that money into a better GPU, more RAM and / or a better SSD. The savings from the CPU alone will allow you to invest a useful amount more into ALL of those areas. There are people who do need a chip like this but they are not gamers. Intel are pushing hard with both the limitations of their tech (see: stupid temperatures) and their marketing BS (see: outright lies) because they know they're currently being held by the short and curlies. My 4 year old i5 may well score within 90% of these gaming benchmarks because the limitation in gaming these days is the GPU. Sorry, Intel, wrong market to aim at.
  • imaheadcase - Saturday, October 20, 2018 - link

    I like how you said limitations in tech and point to temps, like any gamer cares about that. Every gamer wants raw performance, and the fact remains Intel systems are still the easier way to go about it. The reason is simple: most gamers will upgrade from another Intel system and use lots of parts from it that work with current generation stuff.

    It's like the whole Gsync vs non gsync debate. It's a stupid argument, it's not a tax on gsync when you are buying the best monitor anyways.
  • philehidiot - Saturday, October 20, 2018 - link

    Those limitations affect overclocking and therefore available performance. Which is hardly different to much cheaper chips. You're right about upgrading though.
  • emn13 - Saturday, October 20, 2018 - link

    The AVX-512 numbers look suspicious. Both common sense and other examples online suggest that AVX-512 should improve performance by much less than a factor of 2. Additionally, AVX-512 causes varying amounts of frequency throttling, so you're not going to get the full factor of 2.

    This suggests to me that your baseline is somehow misleading. Are you comparing AVX-512 to ancient SSE? To no vectorization at all? Something's not right there.
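
To make the commenter's point about baselines concrete, here is a minimal sketch (in no way the benchmark's actual code; the function names and the assumption that the length is a multiple of 16 are for illustration) of a scalar sum next to an AVX-512 sum. Built with AVX-512 support (e.g. -mavx512f), the vector path handles 16 floats per iteration, so against a purely scalar, unvectorized baseline a large speedup is plausible, whereas against an SSE or AVX2 baseline, and with frequency throttling, the gap shrinks well below the naive ratio.

```c
#include <immintrin.h>
#include <stddef.h>

/* Scalar baseline: one float per loop iteration. */
float sum_scalar(const float *x, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += x[i];
    return s;
}

/* AVX-512 path: 16 floats per iteration (n assumed to be a multiple of 16
   to keep the sketch short). */
float sum_avx512(const float *x, size_t n) {
    __m512 acc = _mm512_setzero_ps();
    for (size_t i = 0; i < n; i += 16)
        acc = _mm512_add_ps(acc, _mm512_loadu_ps(x + i));

    /* Horizontal reduction of the 16 partial sums. */
    float lanes[16];
    _mm512_storeu_ps(lanes, acc);
    float s = 0.0f;
    for (int i = 0; i < 16; i++)
        s += lanes[i];
    return s;
}
```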
