Xe-LP GPU Performance: Deus Ex Mankind Divided

Deus Ex is a widely popular franchise. Although Deus Ex: Mankind Divided (DEMD) was released back in 2016, it is still often held up as a game that taxes the CPU. Built on the Dawn Engine, it is a very complex first-person action game with science-fiction weapons and interfaces, combining first-person, stealth, and role-playing elements. Set in Prague, it deals with themes of transhumanism, conspiracy theories, and a cyberpunk future, and it lets the player choose their own path (stealth, or gun-toting maniac) while offering multiple solutions to its puzzles.

DEMD has an in-game benchmark: an on-rails tour of an environment that showcases some of the game's most striking effects, such as lighting and texturing. Even in 2020, it remains an impressive graphical showcase when everything is cranked up to maximum.

[Chart: Deus Ex Mankind Divided, 600p Minimum Quality]
[Chart: Deus Ex Mankind Divided, 1080p Maximum Quality]

At minimum settings, all of the integrated graphics solutions are easily playable, with AMD winning at 15 W; the 28 W Tiger Lake goes a bit above that, within reach of the desktop APU. At the more typical 1080p Maximum settings, the ~20 FPS result is perhaps a bit too slow for regular gameplay.

Comments

  • JfromImaginstuff - Friday, September 18, 2020 - link

    Intel is planning to release an 8-core, 16-thread SKU, confirmed by one of their managers (can't remember his name), but when that will reach the market is a question mark.
  • RedOnlyFan - Friday, September 18, 2020 - link

    With the space and power constraints, you can choose to pack in more cores or other features that are also very important.
    So Intel chose to add 4 cores + the best iGPU + AI + a neural engine + Thunderbolt + Wi-Fi 6 + PCIe 4.
    AMD chose 8 cores and a decent iGPU.
    So we have to choose between raw power and a more useful package.

    For normal everyday use, all-round performance is more important. There are millions who don't even know what Cinebench is for.
  • Spunjji - Friday, September 18, 2020 - link

    Weird that you're calling it "the best iGPU" when the benchmarks show that it's pretty much equivalent to Vega 8 in most tests at 15 W with LPDDR4X, which is how it's going to be in most notebooks.

    Funny also that you're proclaiming PCIe 4 to be a "useful feature" when the only thing out there that will use it in current notebooks is the MX450, which obviates that iGPU.

    I could go on, but really, Thunderbolt is the only one I'd say is a reasonable argument. A bunch of AMD laptops already have Wi-Fi 6.
  • JayNor - Saturday, September 19, 2020 - link

    But Intel has LPDDR5 support built in. Raising the memory data rate by around 25% should show up broadly as more performance in the benchmarks.

    Intel's Tiger Lake Blueprint Session benchmarks were run with LPDDR4X, btw, so expect better performance when LPDDR5 laptops become available (see the quick bandwidth sketch below).

    https://edc.intel.com/content/www/us/en/products/p...
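
    A quick sanity check on that ~25% figure (a minimal sketch; the LPDDR4X-4266 and LPDDR5-5400 data rates and the 128-bit bus width are assumptions based on Tiger Lake's published memory support, not figures from this article):

    ```python
    # Theoretical peak bandwidth = data rate (MT/s) x bus width (bytes/transfer).
    # Data rates and bus width here are assumed, not taken from the article.
    def peak_bandwidth_gbps(mt_per_s, bus_bits=128):
        return mt_per_s * (bus_bits // 8) / 1000  # GB/s

    lpddr4x = peak_bandwidth_gbps(4266)  # ~68.3 GB/s
    lpddr5 = peak_bandwidth_gbps(5400)   # ~86.4 GB/s
    print(f"uplift: {lpddr5 / lpddr4x - 1:.1%}")  # ~26.6%
    ```

    Peak bandwidth scales linearly with data rate, so the ~26.6% uplift matches the comment's ballpark.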
  • Spunjji - Saturday, September 19, 2020 - link

    I understand and agree. My point was: what does "support" matter if it's not actually usable in the product? This will be an advantage when devices with it are released. Right now, it's irrelevant.
  • abufrejoval - Friday, September 18, 2020 - link

    I'd say they're going for the biggest-volume market (first).

    Adding cores costs silicon real estate and profit per wafer (see the dies-per-wafer sketch below), and the bulk of the laptop market evidently doesn't want to pay double for eight cores at 15 W.

    Being a fab, Intel doesn't seem to mind doing lots of chip variants; for AMD it seems to make more sense to go for volume and fewer variants. AMD's 8-core APU covers a lot of the desktop space as well as laptops, where Intel instead does a distinct 8-core chip.

    Intel might even do distinct iGPU variants at higher CPU core counts (not just via binning), because the cost per SoC layout is calculated differently... at least as long as they can keep up the volumes.

    I'm pretty sure they had a lot of smart guys run the numbers; that doesn't mean things can't turn out differently.
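
    To put the real-estate point in rough numbers, here is a minimal dies-per-wafer sketch using the standard first-order approximation; the die areas are hypothetical round numbers chosen only to illustrate the 4-core vs 8-core trade-off:

    ```python
    import math

    # First-order dies-per-wafer estimate: gross wafer area divided by die area,
    # minus an edge-loss term for partial dies at the rim.
    def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
        r = wafer_diameter_mm / 2
        return math.floor(
            math.pi * r ** 2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
        )

    print(dies_per_wafer(145))  # hypothetical quad-core SoC: ~432 dies per wafer
    print(dies_per_wafer(180))  # hypothetical 8-core variant: ~343, about 20% fewer
    ```

    Fewer candidate dies per wafer means a higher cost per chip before yield even enters the picture.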
  • Drumsticks - Thursday, September 17, 2020 - link

    Regarding:

    A compromise made when increasing the cache by this great an amount is in the associativity, which now increases from 8-way to 20-way, which likely increases conflict misses for the structure.

    On the L3 side, there's also been a change in the microarchitecture, as the cache slice size per core now increases from 2MB to 3MB, totalling 12MB for a 4-core Tiger Lake design. Here Intel was actually able to reduce the associativity from 16-way to 12-way, likely improving cache line conflict misses and improving access parallelism.

    ---

    Doesn't increasing cache associativity *decrease* conflict misses? Your maximum number of conflict misses would be a direct-mapped cache, where everything can go into only one place, and your minimum would be a fully associative cache, where everything can go everywhere (the toy simulation below illustrates this).

    Also, isn't it weird that latency increases with the reduced associativity of the new L3? I guess the fact that it's 50% larger could have a larger impact, but I'd have thought reducing associativity should improve latency and vice versa, even if only slightly.
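
    For what it's worth, the intuition here is right: higher associativity reduces conflict misses. A minimal sketch with a toy LRU set-associative cache (the sizes and the access trace are made up for illustration, not modelled on Willow Cove):

    ```python
    # Toy set-associative cache with LRU replacement, showing that higher
    # associativity reduces conflict misses. All parameters are illustrative.
    def misses(addresses, size_bytes=32768, ways=8, line=64):
        sets = size_bytes // (ways * line)
        cache = [[] for _ in range(sets)]  # each set: tags in LRU order
        count = 0
        for addr in addresses:
            block = addr // line
            idx, tag = block % sets, block // sets
            s = cache[idx]
            if tag in s:
                s.remove(tag)   # hit: refresh to most-recently-used
            else:
                count += 1      # miss
                if len(s) == ways:
                    s.pop(0)    # evict the least-recently-used tag
            s.append(tag)
        return count

    # Four blocks spaced exactly one cache size apart all collide on the same
    # set index; loop the pattern 100 times.
    trace = [i * 32768 for i in range(4)] * 100
    print(misses(trace, ways=1))  # direct-mapped: 400 misses (pure thrashing)
    print(misses(trace, ways=8))  # 8-way: 4 misses (compulsory only)
    ```

    With one way, the four colliding blocks evict each other on every access; with eight ways they coexist in the same set, so only the first touch of each block misses.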
  • Drumsticks - Thursday, September 17, 2020 - link

    Later on, there is:

    The L2 seemingly has gone up from 13 cycles to 14 cycles in Willow Cove, which isn’t all that bad considering it is now 2.5x larger, even though its associativity has gone down.

    ---

    But in the table, associativity is listed as going from 8 way to 20 way. Is something mixed up in the table?
  • AMDSuperFan - Thursday, September 17, 2020 - link

    How does this compare with Big Navi? It seems that Big Navi will be much faster than this, right?
  • Spunjji - Friday, September 18, 2020 - link

    🤡
