Gaming Tests: Civilization 6

Originally penned by Sid Meier and his team, the Civilization series of turn-based strategy games is a cult classic, and many an excuse for an all-nighter trying to get Gandhi to declare war on you due to an integer underflow. Truth be told, I never actually played the first version, but I have played every edition from the second to the sixth, including the fourth as voiced by the late Leonard Nimoy, and it is a game that is easy to pick up, but hard to master.

Benchmarking Civilization has always been somewhat of an oxymoron – for a turn-based strategy game, the frame rate is not necessarily the important thing here, and even in the right mood, something as low as 5 frames per second can be enough. With Civilization 6, however, Firaxis went hardcore on visual fidelity, trying to pull you into the game. As a result, Civilization can be taxing on graphics and CPUs as we crank up the details, especially in DirectX 12.

For this benchmark, we are using the following settings:

  • 480p Low, 1440p Low, 4K Low, 1080p Max

For automation, Firaxis supports launching the in-game automated benchmark from the command line, and the game outputs a results file with frame times. We do as many runs as we can within 10 minutes per resolution/setting combination, and then take averages and percentiles, as sketched below.
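To give an idea of what that post-processing involves, here is a minimal Python sketch that reads a frame-time log and derives the average and 95th percentile FPS figures. The file name and log format (one frame time in milliseconds per line) are assumptions for illustration, not the game's actual output format.

    # Minimal sketch: turn a frame-time log into average and 95th-percentile FPS.
    # Assumption: "Civ6_frametimes.csv" holds one frame time in milliseconds per
    # line; this is a placeholder, not Firaxis's actual output format.
    import statistics

    def summarize(frametimes_ms):
        """Return (average FPS, 95th-percentile FPS) from frame times in ms."""
        avg_fps = 1000.0 / statistics.fmean(frametimes_ms)
        # The 95th-percentile figure reflects the slower frames: take the
        # 95th-percentile frame time and convert it back into an FPS value.
        p95_frametime = statistics.quantiles(frametimes_ms, n=100)[94]
        return avg_fps, 1000.0 / p95_frametime

    if __name__ == "__main__":
        with open("Civ6_frametimes.csv") as f:
            times = [float(line) for line in f if line.strip()]
        avg, p95 = summarize(times)
        print(f"Average FPS: {avg:.1f}, 95th percentile FPS: {p95:.1f}")

In practice, each per-run summary would then be averaged across however many passes fit into the 10-minute window before going into the charts.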

[Benchmark charts: Civilization 6 Average FPS and 95th Percentile at each of the four resolution/quality combinations listed above]

Civ 6 has always been a fan of fast CPU cores and low latency, so perhaps it isn't much of a surprise to see the Core i7 here beat out the latest processors. The Core i7 holds a commanding lead, whereas the chips behind it cluster around 94-96 FPS at 1080p Max settings.

For our integrated graphics tests, we run the first and last combinations of settings (480p Low and 1080p Max).

[Charts: IGP Civilization 6 480p Low (Average FPS), IGP Civilization 6 1080p Max (Average FPS)]

When we switch to integrated graphics, Broadwell isn't particularly playable here.

All of our benchmark results can also be found in our benchmark engine, Bench.

Comments

  • brucethemoose - Monday, November 2, 2020 - link

    Is HBM2e access latency really lower than DDR4/5?

    I can't find any timing info or benchmarks, but my understanding is that it's lower than GDDR6, which already has much higher latency than DDR4.
  • PeachNCream - Monday, November 2, 2020 - link

    I'd like to say thanks for this review! I really love the look back at older hardware in relation to modern systems. It really shows that, in processor power terms, Broadwell/Haswell remain fairly relevant, and the impact of eDRAM (or non-impact in various workloads) makes for really interesting reading.
  • brucethemoose - Monday, November 2, 2020 - link

    Another possibility: the "Radeon Cache" on an upcoming APU acts as a last-level cache for the entire chip, just like on Apple (and Qualcomm?) SoCs.

    There's no extra packaging cost, no fancy 2nd chip, and it would save power.
  • Jorgp2 - Monday, November 2, 2020 - link

    You do realize that Intel has had that for about as long as they've had GPUs on their CPUs, right?
  • brucethemoose - Monday, November 2, 2020 - link

    You mean the iGPUs share L3?

    Well, it wasn't a particularly large cache or a powerful GPU until Broadwell came around.
  • Jorgp2 - Tuesday, November 3, 2020 - link

    >Well, it wasn't a particularly large cache or a powerful GPU until Broadwell came around.

    Larger than the caches on even AMD's largest GPUs until recently.

    Hawaii had a 4 MB cache, and Vega had 6 MB, I believe.
  • eastcoast_pete - Monday, November 2, 2020 - link

    Thanks Ian, great article! Regarding a large, external L4 Cache: any guess on how speed and latency of eDRAM made in more modern silicon would compare with Broadwell's 22 nm one? Let's say if made in Intel's current 14 nm (++ etc)? And, if that'll speed it up enough to make it significantly better than current fast DDR4, would that be a way for Intel to put some "electronic nitrous" on its Tiger Lake and Rocket Lake chips? Because they do need something, or they'll get spanked badly by the new Ryzens.
  • brucethemoose - Monday, November 2, 2020 - link

    I'm guessing most of the latency comes from the travel between the chips, not from the speed of the eDRAM itself. So a shrink wouldn't help much, but EMIB might?

    There is talk of replacing the on-chip SRAM in L3 caches with eDRAM, kind of like what IBM already does. So basically, it's a size vs. speed tradeoff, which is very interesting indeed.
  • quadibloc - Monday, November 2, 2020 - link

    Well, AMD seems to think it was a good idea, given the 128 MB Infinity Cache on their latest graphics cards...
  • Leeea - Monday, November 2, 2020 - link

    Close, but not quite the same.

    AMD has their Infinity Cache on the GPU die. One piece of silicon for the whole thing. This may give faster I/O and lower power consumption.

    Intel's eDRAM caches were a separate piece of silicon altogether.
