Rise of the Tomb Raider

One of the newest games in our gaming benchmark suite is Rise of the Tomb Raider (RoTR), developed by Crystal Dynamics. It is the sequel to the popular Tomb Raider, which was loved for its automated benchmark mode. But don’t let that fool you: the benchmark mode in RoTR is very different this time around.

Visually, the previous Tomb Raider pushed realism to the limits with features such as TressFX, and the new RoTR goes one stage further when it comes to graphics fidelity. This leads to an interesting set of requirements in hardware: some sections of the game are typically GPU limited, whereas others with a lot of long-range physics can be CPU limited, depending on how the driver can translate the DirectX 12 workload.

Where the old game had one benchmark scene, the new game has three scenes with different requirements: Geothermal Valley (1-Valley), Prophet’s Tomb (2-Prophet) and Spine of the Mountain (3-Mountain) - and we test all three. The scenes are taken directly from the game, but it is worth noting that a scene such as 2-Prophet can be the most CPU-limited element of its entire level, and the portion shown in the benchmark is only a small part of that level. Because of this, we report the results for each scene on each graphics card separately.


Graphics options for RoTR are similar to those of other games of this type, offering a set of presets or allowing the user to configure texture quality, anisotropic filtering levels, shadow quality, soft shadows, occlusion, depth of field, tessellation, reflections, foliage, bloom, and features such as PureHair, the successor to TressFX from the previous game.

Again, we test at 1920x1080 and 4K using our native 4K displays. At 1080p we run the High preset, while at 4K we use the Medium preset, which still takes a sizable hit in frame rate.

It is worth noting that RoTR differs a little from our other benchmarks in that it keeps its graphics settings in the registry rather than in a standard ini file, and, unlike the previous TR game, its benchmark cannot be called from the command line. Nonetheless, we scripted around these issues to run the benchmark four times and parse the results. From the frame time data, we report the averages, the 99th percentiles, and our 'time under' analysis; a sketch of how that reduction might look is given below.
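
As a rough illustration of that last step, here is a minimal sketch, in Python, of how a single run's frame-time log might be reduced to an average frame rate, a 99th percentile frame rate, and a 'time under' figure. The file name, the log format (one frame time in milliseconds per line), and the 60 FPS threshold are assumptions made for the example, not a description of our actual harness.

def summarize(frame_times_ms, threshold_fps=60.0):
    # Average FPS: total frames divided by total elapsed time in seconds
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_s

    # 99th percentile: the frame rate sustained for 99% of all frames,
    # taken from the frame time at the 99th percentile position
    worst_1pct_ms = sorted(frame_times_ms)[int(0.99 * (len(frame_times_ms) - 1))]
    p99_fps = 1000.0 / worst_1pct_ms

    # 'Time under': seconds spent rendering frames slower than the threshold
    limit_ms = 1000.0 / threshold_fps
    time_under_s = sum(t for t in frame_times_ms if t > limit_ms) / 1000.0

    return avg_fps, p99_fps, time_under_s

if __name__ == "__main__":
    # Hypothetical log file: one frame time in milliseconds per line
    with open("rotr_run1_frametimes.txt") as f:
        times = [float(line) for line in f if line.strip()]
    avg, p99, under = summarize(times)
    print("Average: %.1f FPS, 99th percentile: %.1f FPS, time under 60 FPS: %.2f s"
          % (avg, p99, under))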

All of our benchmark results can also be found in our benchmark engine, Bench.


[Benchmark results charts: 1080p and 4K]

545 Comments


  • YukaKun - Saturday, April 21, 2018 - link

    Oh, I'm actually curious about your experience with all the systems.

    I'm still running my i7 2700K at ~4.6GHz. I do agree I haven't felt that it's a ~2012 CPU and it does everything pretty damn well still, but I'd like to know if you have noticed a difference between the new AMD and your Sandy Bridge. Same for when you assemble the 2700X.

    I'm trying to find an excuse to get the 2700X, but I just can't find one, haha.

    Cheers!
  • Luckz - Monday, April 23, 2018 - link

    The once-in-a-lifetime chance to largely keep your CPU name (2700K => 2700X) should be all the excuse you need.
  • YukaKun - Monday, April 23, 2018 - link

    That is so incredibly superficial and dumb... I love it!

    Cheers!
  • mapesdhs - Monday, April 23, 2018 - link

    YukaKun, your 2700K is only at 4.6? Deary me, should be 5.0 and proud, doable with just a basic TRUE and one fan. 8) For reference btw, a 2700K at 5GHz gives the same threaded performance as a 6700K at stock.

    And I made a typo in my earlier reply, mentioned the wrong XEON model, should have been the 2680 V2.
  • YukaKun - Tuesday, April 24, 2018 - link

    For daily usage and stability, I found that 4.6GHz worked best in terms of noise/heat/power ratios.

    I also did not disable any power saving features, so it does not work unnecessarily when not under heavy load.

    I'm using AS5 with a TT Frio (the original one) on top, so it's whisper quiet at 4.6GHz and I like it like that. When I made it work at 5GHz, I found I had to have the fans near 100%, so it wasn't something I'd like, TBH.

    But, all of this to say: yes, I've done it, but settled with 4.6GHz.

    Cheers!
  • mapesdhs - Friday, March 29, 2019 - link

    (an old thread, but in case someone comes across it...)

    I use dynamic vcore so I still get the clock/voltage drops when idle. I'm using a Corsair H80 with 2x NDS 120mm PWM, so it's also quiet even at full load; no need for such OTT cooling to handle the load heat, but using an H80 means one can have low noise as well. An ironic advantage of the lower thermal density of the older process sizes: modern CPUs with the same TDP dump it out in a smaller area, making them more difficult to keep cool.

    Having said that, I've been recently pondering an upgrade to get much better general idle power draw and a decent bump in threaded performance. Considering a Ryzen 5 2600 or 7 2700, but might wait for Zen2, not sure yet.
  • moozooh - Sunday, April 22, 2018 - link

    No, it might have to do with the fact that the 8350K has 1.5x the cache size and beastly per-thread performance that is also sustained at all times, so it doesn't have to switch from a lower-powered state (which the older CPUs were slower at), nor does it taper off as other cores get loaded. This is most noticeable in the things Samus mentioned, i.e. "boot times, app launches and gaming". Boot times and app launches are both essentially single-thread tasks with no prior context, and gaming is where a CPU upgrade like that will improve worst-case scenarios by at least an order of magnitude, which is really what's most noticeable.

    For instance, if your monitor is 60Hz and your average framerate is 70, you won't notice the difference between 60 and 70; you will only notice the time spent under 60. Even a mildly overclocked 8350K is still one of the best gaming CPUs for this reason, easily rivaling or outperforming previous-gen Ryzens in most cases and often being on par with the much more expensive 8700K where thread count isn't as important as per-thread performance for responsiveness and eliminating stutters. When pushed to or above 5 GHz, I'm reasonably certain it will still give many of the newer, more expensive chips a run for their money.
  • spdragoo - Friday, April 20, 2018 - link

    Memory prices? Memory prices are still pretty much the way they've always been:
    -- faster memory costs (a little) more than slower memory
    -- larger memory sticks/kits cost (a little) more than smaller sticks/kits
    -- last-gen RAM (DDR3) is (very slightly) cheaper than current-gen RAM (DDR4)

    I suppose you can wait 5 billion years for the Sun to fade out, at which point all RAM (or whatever has replaced it by then) will have the same cost ($0...since no one will be around to buy or sell it)...but I don't think you need to worry about that.
  • Ferrari_Freak - Friday, April 20, 2018 - link

    You didn't write anything about price there... All you've said is that relative pricing for things is the same as it has always been, and that's no surprise.

    The $$$ cost of any given stick is more than it was a year or two ago. 2x8GB DDR4-3200 G.Skill Ripjaws V is $180 on Newegg today. It was $80 two years ago. Clearly not the way they've always been...
  • James5mith - Friday, April 20, 2018 - link

    2x16GB Crucial DDR4-2400 SO-DIMM kit.

    https://www.amazon.com/gp/product/B019FRCV9G/

    November 29th 2016 (when I purchased): $172

    Current Amazon price for exact same kit: $329
