Discrete GPU Gaming Tests

4K Minimum with RTX 2080 Ti

In contrast to 1080p Max, our 4K Minimum testing aims to find differences between CPUs at a playable resolution. There are still plenty of pixels to churn, but the RTX 2080 Ti should at least be hitting 60 FPS in most games here.

A full list of results at various resolutions and settings can be found in our Benchmark Database.

(a-3) Chernobylite - 4K Low - Average FPS

No real change in Chernobylite.

(b-5) Civilization VI - 4K Min - Average FPS
(b-6) Civilization VI - 4K Min - 95th Percentile

Civilization 6 gets a smaller increase in performance here than in the 1080p Maximum test, but there's still a benefit over the previous generation. That being said, the AMD desktop CPUs with more cache pull ahead significantly here.

(c-5) Deus Ex MD - 4K Min - Average FPS
(c-6) Deus Ex MD - 4K Min - 95th Percentile

Deus Ex seems to come to an asymptotic limit, and while the 4000G APUs were behind the curve, the 5000G APUs are solidly there.

(e-5) Final Fantasy 15 - 4K Standard - Average FPS

All CPUs pretty much hit a limit in FF15 above and Far Cry 5 below.

(i-5) Far Cry 5 - 4K Low - Average FPS
(i-6) Far Cry 5 - 4K Low - 95th Percentile

(k-5) Grand Theft Auto V - 4K Low - Average FPS
(k-6) Grand Theft Auto V - 4K Low - 95th Percentile

GTA 5 runs into an odd glitchy mess around 180 FPS, and the new 5000G CPUs can push the RTX 2080 Ti a bit further in that direction - at this point it's probably best to start cranking up some detail settings to avoid it.


135 Comments

  • abufrejoval - Thursday, August 5, 2021 - link

    There are indeed so many variables and at least as many shortages these days. And it's becoming a playground for speculators, who are just looking for such fragilities in the supply chain to extort money.

    I remember some Kaveri-type chips being sold by AMD, which had the GPU parts chopped off by virtue of being "borderline dies" on a round 300mm wafer. Eventually they also had enough of these chips with the CPU (and SoC) portion intact to sell them as a "GPU-less APU".

    Don't know if the general layout of the dies allows for such "halflings" on the left or right of a wafer...
  • mode_13h - Wednesday, August 4, 2021 - link

    Ian, please publish the source of 3DPM, preferably to github, gitlab, etc.
  • mode_13h - Wednesday, August 4, 2021 - link

    For me, the fact that 5600X always beats 5600G is proof that the non-APUs' lack of an on-die memory controller is no real deficiency (nor is the fact that the I/O die is fabbed on an older process node).
  • GeoffreyA - Thursday, August 5, 2021 - link

    The 5600X's bigger cache and boost could be helping it in that regard. But, yes, I don't think the on-die memory controller makes that much of a difference compared to the on-package one.
  • mode_13h - Friday, August 6, 2021 - link

    I wrote that knowing about the cache difference, but it's not going to help in all cases. If the on-die memory controller were a real benefit over having it on the I/O die, I'd expect to see at least a couple benchmarks where the 5600G outperformed the 5600X. However, they didn't switch places, even once!

    I know the 5600X has a higher boost clock, but they're both 65W and the G has a higher base frequency. So, even on well-threaded, non-graphical benchmarks, it's quite telling that the G can never pass the X.
  • GeoffreyA - Friday, August 6, 2021 - link

    Remember how the Core 2 Duo left the Athlon 64 dead on the floor? And that was without an on-die MC.
  • mode_13h - Saturday, August 7, 2021 - link

    That's not relevant, since there were incredible differences in their uArch and fab nodes.

    In this case, we get to see Zen 3 cores on the same manufacturing process. So, it should be a very well-controlled comparison. Still not perfect, but about as close as we're going to get.

    Also, the memory controller is in-package, in both cases. The main difference of concern is whether or not it's integrated into the 7 nm compute die.
  • GeoffreyA - Saturday, August 7, 2021 - link

    In agreement with what you are saying, even in my first comment. I think Cezanne shows that having the memory controller on the package gets the critical gains (vs. the old northbridge), and going onto the main die doesn't add much more.

    As for K8 and Conroe, I always felt it was notable in that C2D was able to do such damage, even without an IMC. Back when K8 was the top dog, the tech press used to make a big deal about its IMC, as if there were no other improvements besides that.
  • mode_13h - Sunday, August 8, 2021 - link

    One bad thing about moving it on-die is that this gave Intel an excuse to tie ECC memory support to the CPU, rather than just the motherboard. I had a regular Pentium 4 with ECC memory, and all it required was getting a motherboard that supported it.

    As I recall, the main reason Intel lagged in moving it on-die is that they were still flirting with RAMBUS, which eventually went pretty much nowhere. At work, we built one dual-CPU machine that required RAMBUS memory, but that was about the only time I touched the stuff.

    As for the benefits of moving it on-die, it was seen as one of the reasons Opteron was able to pull ahead of Pentium 4. Then, when Nehalem eventually did it, it was seen as one of the reasons for its dominance over Core 2.
  • GeoffreyA - Sunday, August 8, 2021 - link

    Intel has a fondness for technologies that go nowhere. RAMBUS was supposed to unlock the true power of the Pentium 4, whatever that meant. Well, the Willamette I used for a decade had plain SDRAM, not even DDR. But that was a downgrade, after my Athlon 64 3000+ gave up the ghost (cheap PSU). That was DDR400. Incidentally, when the problems began, they were RAM-related. Oh, those beeps!
