Rise of the Tomb Raider

One of the newest games in our gaming benchmark suite is Rise of the Tomb Raider (RoTR), developed by Crystal Dynamics. It is the sequel to the popular Tomb Raider, which was well liked for its automated benchmark mode. But don't let that fool you: the benchmark mode in RoTR is very different this time around.

Visually, the previous Tomb Raider pushed realism to the limits with features such as TressFX, and the new RoTR goes one step further in graphics fidelity. This leads to an interesting set of hardware requirements: some sections of the game are typically GPU limited, whereas others with a lot of long-range physics can be CPU limited, depending on how the driver translates the DirectX 12 workload.

Where the old game had one benchmark scene, the new game has three scenes with different requirements: Geothermal Valley (1-Valley), Prophet's Tomb (2-Prophet), and Spine of the Mountain (3-Mountain) - and we test all three. The scenes are designed to be taken straight from the game, but it has been noted that a scene like 2-Prophet can represent the most CPU limited part of its entire level, and the portion shown in the benchmark is only a small slice of that level. Because of this, we report the results for each scene on each graphics card separately.


Graphics options for RoTR are similar to other games of this type, offering several presets or allowing the user to configure texture quality, anisotropic filtering levels, shadow quality, soft shadows, occlusion, depth of field, tessellation, reflections, foliage, bloom, and features like PureHair, which builds on the TressFX of the previous game.

Again, we test at 1920x1080 and 4K using our native 4K displays. At 1080p we run the High preset, while at 4K we use the Medium preset, which still takes a sizable hit in frame rate.

It is worth noting that RoTR is a little different from our other benchmarks in that it keeps its graphics settings in the registry rather than in a standard .ini file, and unlike the previous TR game, the benchmark cannot be called from the command line. Nonetheless, we scripted around these issues to run the benchmark four times and parse the results. From the frame time data, we report the averages, the 99th percentiles, and our 'time under' analysis.
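For anyone replicating the post-processing step, the sketch below shows how per-frame times might be reduced to the metrics we report. This is an illustrative Python example rather than our internal scripts: the file names, the single-column layout, and the 60 FPS 'time under' threshold are assumptions made for the example.

```python
import numpy as np

def summarize(frame_times_ms, threshold_fps=60.0):
    """Reduce one run's per-frame times (in ms) to summary metrics."""
    ft = np.asarray(frame_times_ms, dtype=float)
    avg_fps = 1000.0 * len(ft) / ft.sum()   # average FPS over the run
    p99_ms = np.percentile(ft, 99)          # 99th percentile frame time
    # "Time under": fraction of wall-clock time spent on frames slower
    # than the threshold (60 FPS -> frames longer than ~16.7 ms).
    cutoff_ms = 1000.0 / threshold_fps
    time_under = ft[ft > cutoff_ms].sum() / ft.sum()
    return avg_fps, p99_ms, time_under

# Four automated runs, as described above; file names are hypothetical.
runs = [np.loadtxt(f"rotr_run{i}.csv") for i in range(1, 5)]
for avg_fps, p99_ms, time_under in map(summarize, runs):
    print(f"avg {avg_fps:.1f} FPS, 99th pct {p99_ms:.1f} ms, "
          f"time under 60 FPS: {time_under:.1%}")
```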

All of our benchmark results can also be found in our benchmark engine, Bench.


[Charts: Rise of the Tomb Raider results at 1080p and 4K]

545 Comments

  • rocky12345 - Tuesday, April 24, 2018 - link

    They ran all systems at both Intel's and AMD's listed specs, so AMD's memory was at 2933MHz on Zen+ and Intel's was at 2666MHz on the Coffee Lake 8700K. They did the same for the older-generation parts as well, running those at their listed specs too.

    There have been a few other media outlets that did the same thing and got the same results, or very close to them. AMD's memory controller seems to deliver more bandwidth than Intel's at the same speed, so with Intel not running at 3200MHz like most media outlets used, maybe Intel loses a lot of performance because of that, while AMD lost next to nothing from not going to 3200MHz. It is all just guesses on my part at the moment.

    Food for thought: when Intel released the entire Coffee Lake lineup, they only released the Z370 chipset, which has full support for overclocking, including the memory, and almost all reviews were done with 3200MHz-3400MHz memory on the test beds, even for the non-K Coffee Lake CPUs. Maybe Intel knew this would happen and made sure all Coffee Lakes looked their best in the reviews. For the few sites that retested once the lower-tier chipsets were released, the non-Ks running at their rated memory speeds lost about 5%-7% performance, in some cases even a bit more.

    I am no fanboy of any company; I just put out my opinions and theories based on the information we are given by the companies as well as the media sites.
  • Maxiking - Tuesday, April 24, 2018 - link

    People never fail to amaze me. So you basically know nothing about the topic, yet you still managed to spit out four paragraphs of mess and even offer some "food for thought".

    Slower RAM means a performance regression unless you have big caches, which is not the case for either Intel or AMD.
  • rocky12345 - Tuesday, April 24, 2018 - link

    It seems pretty basic to me what was said in the post. It is not my problem if you do not understand what I and some others have said about this topic. Pretty simple: slower memory means less bandwidth, which in turn gives less performance in memory-intensive workloads such as most games. All you have to do is go look at some benches in the reviews to see AMD has the upper hand when it comes to memory bandwidth; even Hardware Unboxed was pretty surprised by how good AMD's memory controller is compared to Intel's. Yes, Intel's can run memory at higher speeds than AMD's, but even with that said, AMD does just fine. You are right about cache sizes; neither has an overly large cache, but AMD's is bigger on the desktop-class CPUs, and that is most likely one of the reasons their memory bandwidth is slightly better.
  • Maxiking - Wednesday, April 25, 2018 - link

    The raw bandwidth doesn't matter; it's the CAS latency that makes the difference here.

    https://www.anandtech.com/show/11857/memory-scalin...

    https://imgur.com/MhqKfkf

    With CL16, it doesn't look that impressive, does it?

    Now, lower the CAS latencies to something more 2k18-ish, and boom.

    https://www.eteknix.com/memory-speed-large-impact-...

    Another test

    https://www.pcper.com/reviews/Processors/Ryzen-Mem...

    Almost all the popular hardware reviewers don't have a clue. They tell you to OC but do not explain why, or what you should accomplish by overclocking. Imagine you have some bad Hynix RAM which can barely be overclocked from 2666 to 3000MHz, but you have to loosen the timings from CL15 to CL20 to get there. [A worked latency example follows the comments below.]
  • mapesdhs - Monday, May 14, 2018 - link

    schlock, the chips were run at official spec. Or are you saying it's AMD's fault that Intel doesn't officially support faster speeds? :D Also, GN showed that subtimings have become rather important for AMD CPUs; some motherboards left on Auto for subtimings will make very good selections for them, giving a measurable performance advantage.
  • peevee - Tuesday, April 24, 2018 - link

    It is April 24th, and the page on X470 still states: "Technically the details of the chipset are also covered by the April 19th embargo, so we cannot mention exactly what makes them different to the X370 platform until then."
  • jor5 - Tuesday, April 24, 2018 - link

    The review is a shambles. They've gone to ground.
  • coburn_c - Tuesday, April 24, 2018 - link

    I have been wanting to read their take on X470...
  • risa2000 - Wednesday, April 25, 2018 - link

    It is my favorite page too.
  • mpbello - Tuesday, April 24, 2018 - link

    Today Phoronix is reporting that after AMD's newest AGESA update, their 2700X system is showing a 10+% improvement on a number of benchmarks. It is unknown whether the impact will be the same on Windows. But you see how all these variables could explain the differences.
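On the memory timings point raised in the thread above: a standard approximation for first-word latency is latency (ns) ≈ 2000 × CL / transfer rate (MT/s), since DDR memory clocks at half its transfer rate. The short Python sketch below applies that formula to the hypothetical kits from the comment; the specific kits and numbers are illustrative, not measured results.

```python
def first_word_latency_ns(transfer_rate_mts, cas_latency):
    """Approximate first-word latency: CL cycles at the memory clock,
    which is half the transfer rate on DDR (two transfers per clock)."""
    return 2000.0 * cas_latency / transfer_rate_mts

# The hypothetical kits from the comment: the "overclock" from
# 2666 CL15 to 3000 CL20 raises bandwidth but worsens latency.
for label, rate, cl in [("DDR4-2666 CL15", 2666, 15),
                        ("DDR4-3000 CL20", 3000, 20)]:
    print(f"{label}: {first_word_latency_ns(rate, cl):.2f} ns")
# DDR4-2666 CL15: 11.25 ns
# DDR4-3000 CL20: 13.33 ns
```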
