Xe-LP GPU Performance: Civilization VI

Originally penned by Sid Meier and his team, the Civilization series of turn-based strategy games is a cult classic, and many an excuse for an all-nighter spent trying to get Gandhi to declare war on you due to an integer underflow. Truth be told I never actually played the first version, but I have played every edition from the second to the sixth, including the fourth as voiced by the late Leonard Nimoy, and it is a game that is easy to pick up, but hard to master.

Benchmarking Civilization has always been somewhat of an oxymoron – for a turn-based strategy game, the frame rate is not necessarily the important thing here, and even in the right mood, something as low as 5 frames per second can be enough. With Civilization 6 however, Firaxis went hardcore on visual fidelity, trying to pull you into the game. As a result, Civilization can be taxing on graphics and CPUs as we crank up the details, especially in DirectX 12.

Civilization 6, 480p Minimum Quality
Civilization 6, 1080p Maximum Quality

Civ6 is a game that enjoys lots of CPU performance, so we can see the desktop APU out front here. The eight cores of the 4800U get ahead of the 15 W version of Tiger Lake in both of our tests, although the 28 W power mode gets an 8% lead in the CPU-limited test.


  • JfromImaginstuff - Friday, September 18, 2020 - link

Intel is planning to release an 8-core, 16-thread SKU, confirmed by one of their management (can't remember his name), but when that'll reach the market is a question mark
  • RedOnlyFan - Friday, September 18, 2020 - link

With the space and power constraints you can choose to pack more cores or other features that are also very important.
So Intel chose to add 4 cores + the best iGPU + AI + neural engine + Thunderbolt + Wi-Fi 6 + PCIe 4.
AMD chose 8 cores and a decent iGPU.
So we have to choose between raw power and a more useful package.

For normal everyday use, all-round performance is more important. There are millions who don't even know what Cinebench is for.
  • Spunjji - Friday, September 18, 2020 - link

    Weird that you're calling it "the best iGPU" when the benchmarks show that it's pretty much equivalent to Vega 8 in most tests at 15W with LPDDR4X, which is how it's going to be in most notebooks.

    Funny also that you're proclaiming PCIe 4 to be a "useful feature" when the only thing out there that will use it in current notebooks is the MX450, which obviates that iGPU.

I could go on but really, Thunderbolt is the only one I'd say is a reasonable argument. A bunch of AMD laptops already have Wi-Fi 6.
  • JayNor - Saturday, September 19, 2020 - link

But Intel has LPDDR5 support built in. Raising the memory data rate by around 25% is something that should show up broadly as more performance in the benchmarks.

Intel's Tiger Lake Blueprint Session benchmarks were run with LPDDR4X, btw, so expect better performance when LPDDR5 laptops become available.

    https://edc.intel.com/content/www/us/en/products/p...
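As a rough sanity check on that ~25% figure, here is a back-of-the-envelope sketch. The rated speeds assumed below (LPDDR4X-4266 vs. LPDDR5-5400 on a 128-bit bus) are an assumption about Tiger Lake's memory controller, not figures stated in the comment:

```python
# Back-of-the-envelope check of the "~25%" claim.
# Assumed rated speeds: LPDDR4X-4266 vs. LPDDR5-5400, 128-bit bus in both cases.
lpddr4x_mts, lpddr5_mts = 4266, 5400   # transfers per second, in millions
bus_bytes = 16                          # 128-bit bus = 16 bytes per transfer

bw4 = lpddr4x_mts * bus_bytes / 1000    # peak bandwidth in GB/s
bw5 = lpddr5_mts * bus_bytes / 1000

print(f"{bw4:.1f} GB/s -> {bw5:.1f} GB/s (+{(bw5 / bw4 - 1) * 100:.0f}%)")
# prints "68.3 GB/s -> 86.4 GB/s (+27%)"
```

So under those assumptions the uplift is closer to 27%, broadly in line with the "around 25%" in the comment.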
  • Spunjji - Saturday, September 19, 2020 - link

I understand and agree. My point was, what does "support" matter if it's not actually usable in the product? This will be an advantage when devices with it release. Right now, it's irrelevant.
  • abufrejoval - Friday, September 18, 2020 - link

    I'd say going for the biggest volume market (first).

    Adding cores costs silicon real-estate and profit per wafer and the bulk of the laptop market evidently doesn't want to pay double for eight cores at 15 Watts.

Being a fab, Intel doesn't seem to mind doing lots of chip variants; for AMD it seems to make more sense to go for volume and fewer variants. The AMD 8-core APU covers a lot of the desktop market, but also laptops, where Intel just does a distinct 8-core chip.

Intel might even do distinct iGPU variants at higher CPU core counts (not just via binning), because the cost per SoC layout is calculated differently... at least as long as they can keep up the volumes.

I'm pretty sure they had a lot of smart guys run the numbers; that doesn't mean things might not turn out differently.
  • Drumsticks - Thursday, September 17, 2020 - link

    Regarding:

    Compromises that had been made when increasing the cache by this great of an amount is in the associativity, which now increases from 8-way to a 20-way, which likely increases conflict misses for the structure.

    On the L3 side, there’s also been a change in the microarchitecture as the cache slice size per core now increases from 2MB to 3MB, totalling to 12MB for a 4-core Tiger Lake design. Here Intel was actually able to reduce the associativity from 16-way to 12-way, likely improving cache line conflict misses and improving access parallelism.

    ---

    Doesn't increasing cache associativity *decrease* conflict misses? Your maximum number of conflict misses would be a direct mapped cache, where everything can go into only one place, and your minimum number of conflict misses would be a fully associative cache, where everything can go everywhere.

    Also, isn't it weird that latency increases with the reduced associativity of the new L3? I guess the fact that it's 50% larger could have a larger impact, but I'd have thought reducing associativity should improve latency and vice versa, even if only slightly.
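The associativity point above can be demonstrated with a toy simulation. This is a minimal LRU set-associative cache model with made-up sizes, not Willow Cove's actual geometry; it only shows that, at equal capacity, higher associativity absorbs conflict misses that cripple a direct-mapped cache:

```python
# Toy cache model: same total capacity, different associativity.
# Illustrates that *higher* associativity reduces conflict misses.
# Sizes and the access pattern are illustrative, not any real CPU's config.

def misses(addresses, num_sets, ways):
    """Count misses for an LRU set-associative cache of num_sets x ways lines."""
    sets = [[] for _ in range(num_sets)]
    miss_count = 0
    for addr in addresses:
        s = sets[addr % num_sets]       # index by simple modulo hashing
        if addr in s:
            s.remove(addr)              # hit: move line to MRU position
        else:
            miss_count += 1
            if len(s) == ways:          # set full: evict the LRU line
                s.pop(0)
        s.append(addr)
    return miss_count

# 16 lines of total capacity either way; blocks 0 and 16 alias to the
# same set in the direct-mapped configuration.
pattern = [0, 16, 0, 16] * 50           # ping-pong between conflicting lines

direct_mapped = misses(pattern, num_sets=16, ways=1)  # every access evicts the other line
four_way      = misses(pattern, num_sets=4,  ways=4)  # both lines coexist in one set

print(direct_mapped, four_way)          # prints "200 2"
```

Same capacity, same access stream: the direct-mapped cache misses on all 200 accesses, while the 4-way cache misses only on the two cold fills, which is why increasing associativity should decrease conflict misses, as the comment argues.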
  • Drumsticks - Thursday, September 17, 2020 - link

    Later on, there is:

    The L2 seemingly has gone up from 13 cycles to 14 cycles in Willow Cove, which isn’t all that bad considering it is now 2.5x larger, even though its associativity has gone down.

    ---

    But in the table, associativity is listed as going from 8 way to 20 way. Is something mixed up in the table?
  • AMDSuperFan - Thursday, September 17, 2020 - link

How does this compare with Big Navi? It seems that Big Navi will be much faster than this, right?
  • Spunjji - Friday, September 18, 2020 - link

    🤡
