Gaming: Shadow of the Tomb Raider (DX12)

The latest instalment of the Tomb Raider franchise does less rising and lurks more in the shadows with Shadow of the Tomb Raider. As expected, this action-adventure follows Lara Croft, the franchise's protagonist, as she muscles through the Mesoamerican and South American regions looking to stop a Mayan apocalypse she herself unleashed. Shadow of the Tomb Raider is the direct sequel to Rise of the Tomb Raider; it was developed by Eidos Montreal and Crystal Dynamics, published by Square Enix, and hit shelves across multiple platforms in September 2018. The title effectively closes the Lara Croft origins story, and it received critical acclaim upon release.

The integrated Shadow of the Tomb Raider benchmark is similar to that of its predecessor, Rise of the Tomb Raider, which we used in our previous benchmarking suite. The newer game supports both DirectX 11 and DirectX 12, and this particular title has been touted as having one of the best DirectX 12 implementations of any game released so far.

AnandTech CPU Gaming 2019 Game List

Game                        Genre    Release Date   API    IGP        Low            Med          High
Shadow of the Tomb Raider   Action   Sep 2018       DX12   720p Low   1080p Medium   1440p High   4K Highest

*Strange Brigade is run in DX12 and Vulkan modes

All of our benchmark results can also be found in our benchmark engine, Bench.

[Charts: Shadow of the Tomb Raider average FPS and 95th percentile frame rates at the IGP, Low, Medium, and High test settings]
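For reference, the two figures reported in these charts are derived from the raw frame times of each run. A minimal sketch of the usual calculation (the frame-time values here are made up for illustration; numpy assumed):

```python
import numpy as np

# Hypothetical frame times (ms) from one benchmark pass.
frame_times_ms = np.array([16.6, 17.1, 15.9, 33.4, 16.8, 18.2, 16.5, 25.0])

# Average FPS: total frames divided by total elapsed time.
average_fps = len(frame_times_ms) / (frame_times_ms.sum() / 1000.0)

# The "95th percentile" figure in frame-rate charts typically means the
# frame rate that 95% of frames meet or exceed, i.e. the 95th percentile
# of frame times converted back into FPS.
p95_frame_time_ms = np.percentile(frame_times_ms, 95)
p95_fps = 1000.0 / p95_frame_time_ms

print(f"Average FPS: {average_fps:.1f}")
print(f"95th percentile: {p95_fps:.1f} FPS")
```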

Diving into Shadow of the Tomb Raider, we have another game that’s mostly GPU-bound at its 1080p settings. At 1080p Medium the 9900K is actually a step behind the 9700K – noisy results in their purest form – while at 720p Low it’s still technically behind the 9700K. Either way, once we turn down our settings low enough to remove the GPU bottleneck, it’s overall another typical showing for the new CFL-R processors. Intel’s latest and greatest is several percent ahead of its predecessors, but none of these games are in a position to really take advantage of the extra two cores. So instead it’s all about frequency and L3 cache.

Though this game (like so many others) does seem to reinforce the idea that the 9600K is the new 8700K. The 8700K is still ahead by a few frames at CPU-bound settings, but despite losing HyperThreading, the 9600K is still hanging in the fight at a noticeably lower price.

Comments (274)

  • 0ldman79 - Friday, October 19, 2018 - link

    There are certainly occasions where more cores are better than clock speed.

    Just look at certain mining apps. You can drop the power usage by half and only lose a little processing speed, but drop them to 2 cores at full power instead of 4 and it is a *huge* drop.

    I've been playing with the CPU max speed in Windows power management on my various laptops. The Skylake i5-6300HQ can go down to some seriously low power levels if you play with it a bit. The recent Windows updates have lost a lot of the Intel Dynamic Thermal control though. That's a shame.
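    That max-speed cap can also be scripted rather than clicked through. A minimal sketch (assuming Windows' stock powercfg aliases and an elevated prompt; the 50% value is just an example):

    ```python
    import subprocess

    def set_max_cpu_state(percent: int) -> None:
        """Cap the 'Maximum processor state' (on AC power) via powercfg."""
        subprocess.run(
            ["powercfg", "/setacvalueindex", "SCHEME_CURRENT",
             "SUB_PROCESSOR", "PROCTHROTTLEMAX", str(percent)],
            check=True,
        )
        # Re-apply the active scheme so the new cap takes effect.
        subprocess.run(["powercfg", "/setactive", "SCHEME_CURRENT"],
                       check=True)

    set_max_cpu_state(50)  # e.g. halve the allowed maximum processor state
    ```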
  • Makaveli - Friday, October 19, 2018 - link

    Power consumption rules on mobile parts, so why would they release an 8-core model?
  • notashill - Friday, October 19, 2018 - link

    Because you get more performance at the same power level using more cores at lower clocks. The additional cores are power gated when not in use.
  • evernessince - Saturday, October 20, 2018 - link

    Not judging by the power consumption and heat output displayed here.
  • mkaibear - Friday, October 19, 2018 - link

    9700K is definitely the way to go on the non-HEDT. 9900K is technically impressive but the heat? Gosh.

    It's definitely made me consider waiting for the 9800X though - if the 7820X full load power is 145W ("TDP" 140W) at 3.6/4.3, then the 9800X isn't likely to be too much higher than that at 3.8/4.5.

    Hrm.
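    As a rough sanity check on that guess (a sketch only: it scales the measured 145 W linearly with the all-core turbo bump from 4.3 to 4.5 GHz at fixed voltage, which is optimistic since real power rises faster once voltage has to climb):

    ```python
    # Lower-bound estimate: dynamic power scales roughly linearly with
    # frequency at a fixed voltage (P = C * V^2 * f), so hold V and scale f.
    p_7820x_w = 145.0
    est_9800x_w = p_7820x_w * (4.5 / 4.3)
    print(f"~{est_9800x_w:.0f} W")  # ~152 W, before any voltage increase
    ```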
  • Cooe - Friday, October 19, 2018 - link

    "9700K is definitely the way to go on the non-HEDT."

    I think you meant to say "Ryzen 5 2600 unless your GPU's so fast, it'll HEAVILY CPU-bind you in gaming" but spelt it wrong ;). The 9700K is a very good CPU, no doubt, but to claim it the undisputed mainstream champ at its currently mediocre bang/$ value (so important for the mainstream market) doesn't make any sense, or accurately represent what people in the mainstream are ACTUALLY buying (lots of Ryzen 5 2600's & i5-8400's; both with a MUCH saner claim to the "best overall mainstream CPU" title).
  • mkaibear - Saturday, October 20, 2018 - link

    No, I meant to say "9700K is definitely the way to go on the non-HEDT".

    Don't put words in people's mouths. I don't just game. The video encoding tests in particular are telling - I can get almost a third better performance with the 9700K than I can with the R5 2600X.

    >"best overall mainstream CPU" title

    Please don't straw man either. Nowhere did I say that it was the best overall mainstream CPU (that's the R7 2700X in my opinion), but for my particular use case the 9700K or the 9800X are better suited at present.
  • koaschten - Friday, October 19, 2018 - link

    Uhm yeah... so where are the 9900k overclocking results the article claims are currently being uploaded? :)
  • watzupken - Friday, October 19, 2018 - link

    The i9 processor is expected to be quite impressive in performance. However, this review also reveals that Intel is struggling to pull more tricks out of its current 14nm process and Skylake architecture. The lack of IPC improvement over the last few generations is forcing them to up the clockspeed to cling on to their edge. Considering that they are launching the new series this late in the year, they are at risk of AMD springing a surprise with their 7nm Zen 2 slated to launch next year.
  • SquarePeg - Friday, October 19, 2018 - link

    If the rumored 13% IPC gain and minimum 500MHz uplift are for real with Zen 2, then AMD would take the performance crown. I'm not expecting very high clocks from Intel's relaxed 10nm process, so it remains to be seen what kind of IPC gain they can pull with Ice Lake. It wouldn't surprise me if they had a mild performance regression because of how long they've had to optimize 14nm for clock speed. Either way I'm all in on a new Ryzen 3 build next year.
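    Compounding those two rumored gains gives a quick feel for the claim (a sketch; the 4.3 GHz baseline is an assumed Zen+ boost clock, not a figure from this thread):

    ```python
    # Single-thread performance scales roughly with IPC x frequency.
    baseline_ghz = 4.3            # assumed Zen+ boost clock
    new_ghz = baseline_ghz + 0.5  # rumored minimum +500 MHz
    ipc_gain = 1.13               # rumored +13% IPC

    uplift = ipc_gain * (new_ghz / baseline_ghz)
    print(f"Estimated single-thread uplift: {(uplift - 1) * 100:.0f}%")  # ~26%
    ```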
