Gaming Performance: 1080p

Moving along, here's a look at a more balanced gaming scenario, running games at 1080p with maximum image quality.

Civilization VI


[Chart (a-7): Civilization VI - 1080p Max - Average FPS]
[Chart (a-8): Civilization VI - 1080p Max - 95th Percentile]

World of Tanks

[Chart (b-3): World of Tanks - 1080p Standard - Average FPS]
[Chart (b-4): World of Tanks - 1080p Standard - 95th Percentile]
[Chart (b-5): World of Tanks - 1080p Max - Average FPS]
[Chart (b-6): World of Tanks - 1080p Max - 95th Percentile]

Borderlands 3

[Chart (c-7): Borderlands 3 - 1080p Max - Average FPS]
[Chart (c-8): Borderlands 3 - 1080p Max - 95th Percentile]

Grand Theft Auto V

[Chart (e-7): Grand Theft Auto V - 1080p Max - Average FPS]
[Chart (e-8): Grand Theft Auto V - 1080p Max - 95th Percentile]

Red Dead Redemption 2

[Chart (f-7): Red Dead 2 - 1080p Max - Average FPS]
[Chart (f-8): Red Dead 2 - 1080p Max - 95th Percentile]

F1 2022

[Chart (g-3): F1 2022 - 1080p Ultra High - Average FPS]
[Chart (g-4): F1 2022 - 1080p Ultra High - 95th Percentile]

Hitman 3

[Chart (h-3): Hitman 3 - 1080p Ultra - Average FPS]
[Chart (h-4): Hitman 3 - 1080p Ultra - 95th Percentile]

Total War: Warhammer 3

[Chart (i-2): Total War Warhammer 3 - 1080p Ultra - Average FPS]

Cyberpunk 2077

[Chart (k-3): Cyberpunk 2077 - 1080p Ultra - Average FPS]
[Chart (k-4): Cyberpunk 2077 - 1080p Ultra - 95th Percentile]

The 1920 x 1080 resolution is still popular with users (even I still game at 1080p), and looking at our results with our AMD Radeon RX 6950 XT graphics card, the 13th Gen Core series processors are highly competitive. In some cases, AMD's Ryzen 7 5800X3D with 96 MB of 3D V-Cache makes for great value in gaming, even if it's not really on par with Ryzen 7000 or Intel's 13th Gen in compute performance.

There are certainly trade-offs from title to title in whether a game favors AMD or Intel, but the key takeaway is that things are competitive, especially in 1080p gaming.
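Since the charts above report both an average and a 95th percentile framerate, here is a minimal sketch of how such figures can be derived from a per-frame render-time log. The exact methodology behind these charts is not documented here, so the percentile convention below (95th percentile of frame times, converted to FPS) is an assumption for illustration only.

```python
# Sketch: computing average FPS and a 95th-percentile framerate from a
# frame-time log. Illustrative only; not the benchmark's actual method.

def fps_metrics(frame_times_ms):
    """frame_times_ms: list of per-frame render times in milliseconds."""
    n = len(frame_times_ms)
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = n / total_s  # average framerate over the whole run

    # The 95th percentile of frame times captures the slow frames that
    # hurt perceived smoothness; converting to FPS yields a figure
    # directly comparable to the average.
    ordered = sorted(frame_times_ms)
    idx = min(n - 1, int(round(0.95 * (n - 1))))
    p95_ms = ordered[idx]
    p95_fps = 1000.0 / p95_ms
    return avg_fps, p95_fps
```

A run that mostly holds ~60 FPS (16.7 ms frames) but stutters to 33.3 ms occasionally will show a healthy average but a noticeably lower 95th percentile figure, which is exactly why both numbers are charted.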


169 Comments


  • mode_13h - Friday, October 21, 2022 - link

    "The new instruction cache on Gracemont is actually very unique. x86 instruction encoding is all over the place and in the worst (and very rare) case can be as long as 15 bytes long. Pre-decoding an instruction is a costly linear operation and you can’t seek the next instruction before determining the length of the prior one. Gracemont, like Tremont, does not have a micro-op cache like the big cores do, so instructions do have to be decoded each time they are fetched. To assist that process, Gracemont introduced a new on-demand instruction length decoder or OD-ILD for short. The OD-ILD generates pre-decode information which is stored alongside the instruction cache. This allows instructions fetched from the L1$ for the second time to bypass the usual pre-decode stage and save on cycles and power."

    Source: https://fuse.wikichip.org/news/6102/intels-gracemo...
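The pre-decode caching idea quoted above can be modeled in miniature. This toy invents a fake ISA in which the first byte of each instruction encodes its total length, and uses a dict where real hardware would store pre-decode bits alongside the instruction cache; it only illustrates why caching lengths lets repeat fetches skip the serial length-decode step.

```python
# Toy model of on-demand instruction length decoding (OD-ILD idea):
# variable-length instructions make boundary finding a serial scan, so
# cache each decoded length and bypass the slow path on repeat fetches.
# Fake ISA for illustration: byte 0 of an instruction is its length.

def decode_length(code, pos):
    """Stand-in for the costly length-decode step."""
    return code[pos]

class PredecodeCache:
    def __init__(self):
        self.lengths = {}      # pos -> length ("pre-decode bits")
        self.slow_decodes = 0  # how often the slow path ran

    def boundaries(self, code):
        """Walk the byte stream, returning instruction start offsets."""
        pos = 0
        starts = []
        while pos < len(code):
            if pos in self.lengths:          # hit: bypass length decode
                length = self.lengths[pos]
            else:                            # miss: slow decode, once
                length = decode_length(code, pos)
                self.lengths[pos] = length
                self.slow_decodes += 1
            starts.append(pos)
            pos += length
        return starts
```

The first pass over a code block pays for every length decode; a second pass over the same block finds all lengths cached and does no slow decodes at all, which is the cycle and power saving the quote describes.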
  • Sailor23M - Friday, October 21, 2022 - link

    Interesting to see the Ryzen 5 7600X perform so well in the Excel/PPT benchmarks. Why is that so?
  • Makste - Friday, October 21, 2022 - link

    Thank you for the review. So Intel, too, is finally throwing more cores and higher frequencies at the problem these days, which in turn increases heat and power usage. AMD is guilty of the same practice, but hasn't gone to the same lengths as Intel. 16 cores versus supposedly efficient cores. What is going on?
  • ricebunny - Friday, October 21, 2022 - link

    It would be a good idea to highlight that the MT SPEC benchmarks are just N instantiations of the single-threaded test. They are not indicative of parallel computing application performance. There are a few dedicated SPEC benchmarks for parallel performance, but for some reason they are never included in AnandTech's benchmarks.
  • Ryan Smith - Friday, October 21, 2022 - link

    "There are a few dedicated SPEC benchmarks for parallel performance but for some reason they are never included in Anandtechs benchmarks."

    They're not part of the actual SPEC CPU suite. I'm assuming you're talking about the SPEC Workstation benchmarks, which are system-level benchmarks and a whole other kettle of fish.

    With SPEC, we're primarily after a holistic look at the CPU architecture, and in the rate-N workloads, whether there's enough memory bandwidth and other resources to keep the CPU cores fed.
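The distinction being discussed can be sketched structurally: a rate-N style run launches N identical, independent instances and scores aggregate throughput, whereas a truly parallel application splits one problem across workers. The workload and counts below are invented, and Python threads are used only to show the structure (the GIL prevents real CPU parallelism for pure-Python work).

```python
# Sketch: SPECrate-style "MT" (N independent copies) versus cooperative
# parallelism (one problem divided across workers). Illustrative only.

from concurrent.futures import ThreadPoolExecutor

def serial_workload(n_items=50_000):
    # One self-contained benchmark instance: no sharing, no synchronization.
    return sum(i * i for i in range(n_items))

def rate_n(copies):
    """Rate-N style: run `copies` identical, independent instances.
    Aggregate throughput is what gets scored."""
    with ThreadPoolExecutor(max_workers=copies) as ex:
        return list(ex.map(lambda _: serial_workload(), range(copies)))

def cooperative(workers, n_items=50_000):
    """Truly parallel: split ONE problem of the same size across workers.
    This is the kind of scaling a rate-N run does not measure."""
    step = n_items // workers
    chunks = [(w * step, n_items if w == workers - 1 else (w + 1) * step)
              for w in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = ex.map(lambda c: sum(i * i for i in range(c[0], c[1])), chunks)
    return sum(parts)
```

In the rate-N case every copy completes a full-size job, so the run mainly stresses shared resources like memory bandwidth; in the cooperative case, inter-thread scaling of a single job is what's under test.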
  • wolfesteinabhi - Friday, October 21, 2022 - link

    It's strange to me that when we are talking about value ... especially for budget-constrained buyers ... who are also willing to let go of bleeding-edge performance ... we don't even mention the AM4 platform.

    AM4 is still a good, if not great (not to mention mature/stable), platform for many, and you can still buy a lot of reasonably priced processors, including the 5800X3D. Users still have the chance to upgrade to a 5950X if they need more CPU at a later date.
  • cowymtber - Friday, October 21, 2022 - link

    Burning hot POS.
  • BernieW - Friday, October 21, 2022 - link

    Disappointed that you didn't spend more time investigating the serious regression for the 13900K vs. the 12900K in the 502.gcc_r test. The single-threaded test does not show the same regression, so it's a curious result that could indicate something wrong with the test setup. Alternatively, perhaps the 13900K was throttling during that part of the test, or maybe E-cores are really not good at compiling code.
  • Avalon - Friday, October 21, 2022 - link

    I had the same thought. Why publish something so obviously anomalous and not even say anything about it? Did you try re-testing it? Did you accidentally flip the scores between the 12th and 13th Gen? There's no obvious reason this should be happening given the few changes between 12th and 13th Gen cores.
  • Ryan Smith - Friday, October 21, 2022 - link

    "Disappointed that you didn't spend more time investigating the serious regression for the 13900K vs the 12900K in the 502.gc_r test."

    We still are. That was flagged earlier this week, and re-runs have produced the same results.

    So at this point we're digging into matters a bit more trying to figure out what is going on, as the cause is non-obvious. I'm thinking it may be a thread director hiccup or an issue with the ratio of P and E cores, but there's a lot of different (and weird) ways this could go.
