Rise of the Tomb Raider (1080p, 4K)

One of the newest games in our benchmark suite is Rise of the Tomb Raider (RoTR), developed by Crystal Dynamics: the sequel to the popular Tomb Raider reboot, which was loved for its automated benchmark mode. But don’t let that fool you: the benchmark mode in RoTR is very different this time around.

Visually, the previous Tomb Raider pushed realism to the limits with features such as TressFX, and the new RoTR goes one stage further in graphics fidelity. This leads to an interesting set of hardware requirements: some sections of the game are typically GPU limited, whereas others with a lot of long-range physics can be CPU limited, depending on how well the driver can translate the DirectX 12 workload.

Where the old game had a single benchmark scene, the new game has three scenes with different requirements: Spine of the Mountain (1-Mountain), Prophet’s Tomb (2-Prophet), and Geothermal Valley (3-Valley), and we test all three. The scenes are taken from the game itself, but note that a scene such as 2-Prophet can be among the most CPU-limited parts of its entire level, and the benchmark shows only a small portion of that level. Because of this, we report the results for each scene on each graphics card separately.

Graphics options for RoTR are similar to those of other games of this type, offering presets or letting the user configure texture quality, anisotropic filtering levels, shadow quality, soft shadows, occlusion, depth of field, tessellation, reflections, foliage, bloom, and features such as PureHair, the successor to the previous game’s TressFX.

Again, we test at 1920x1080 and 4K using our native 4K displays. At 1080p we run the High preset, while at 4K we use the Medium preset, which still incurs a sizable frame rate hit.

It is worth noting that RoTR is a little different from our other benchmarks in that it keeps its graphics settings in the registry rather than in a standard ini file, and, unlike the previous Tomb Raider, its benchmark cannot be called from the command line. Nonetheless, we scripted around these issues to run the benchmark four times and parse the results. From the frame time data, we report the averages, 99th percentiles, and our time under analysis.
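As a rough illustration of the post-processing step only, the sketch below computes the three metrics we report from a list of per-frame times. The log file name, its one-value-per-line format, and the 60 FPS threshold are assumptions for illustration, not our actual harness:

```python
# Minimal sketch of the metric extraction described above: average frame
# rate, 99th percentile, and "time under" a frame rate threshold.
# Log name/format and the 60 FPS threshold are illustrative assumptions.

def load_frame_times_ms(path):
    """Assumed log format: one frame time in milliseconds per line."""
    with open(path) as f:
        return [float(line) for line in f if line.strip()]

def summarize(frame_times_ms, threshold_fps=60.0):
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_s

    # 99th percentile frame time: the value 99% of frames are faster than.
    ordered = sorted(frame_times_ms)
    idx = max(0, int(round(0.99 * len(ordered))) - 1)
    p99_fps = 1000.0 / ordered[idx]

    # "Time under" analysis: total seconds spent rendering frames that
    # were slower than the threshold (16.67 ms per frame for 60 FPS).
    limit_ms = 1000.0 / threshold_fps
    time_under_s = sum(t for t in frame_times_ms if t > limit_ms) / 1000.0

    return avg_fps, p99_fps, time_under_s

if __name__ == "__main__":
    # Hypothetical file name for one of the four automated runs.
    avg, p99, under = summarize(load_frame_times_ms("rottr_run1.txt"))
    print(f"Average: {avg:.1f} FPS, 99th pct: {p99:.1f} FPS, "
          f"time under 60 FPS: {under:.1f} s")
```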

All of our benchmark results can also be found in our benchmark engine, Bench.

#1 Spine of the Mountain

MSI GTX 1080 Gaming 8G Performance: [1080p and 4K results charts]

ASUS GTX 1060 Strix 6G Performance: [1080p and 4K results charts]

Sapphire Nitro R9 Fury 4G Performance: [1080p and 4K results charts]

Sapphire Nitro RX 480 8G Performance: [1080p and 4K results charts]

#2 Prophet’s Tomb

MSI GTX 1080 Gaming 8G Performance: [1080p and 4K results charts]

ASUS GTX 1060 Strix 6G Performance: [1080p and 4K results charts]

Sapphire Nitro R9 Fury 4G Performance: [1080p and 4K results charts]

Sapphire Nitro RX 480 8G Performance: [1080p and 4K results charts]

#3 Geothermal Valley

MSI GTX 1080 Gaming 8G Performance: [1080p and 4K results charts]

ASUS GTX 1060 Strix 6G Performance: [1080p and 4K results charts]

Sapphire Nitro R9 Fury 4G Performance: [1080p and 4K results charts]

Sapphire Nitro RX 480 8G Performance: [1080p and 4K results charts]

It's clear from these results that the 1950X is not the best gaming chip when in its default mode.

Comments

  • BOBOSTRUMF - Friday, August 11, 2017 - link

    Actually, Intel's 140W parts can consume more than 210W if you want the top unrestricted performance. Read the Tom's Hardware review.
  • Filiprino - Thursday, August 10, 2017 - link

    How come WinRAR is faster with the 10-core Broadwell than with the 10-core Skylake?
    What did they change in Cinebench going from 10 to 11.5? Threadripper is the faster CPU in Cinebench 10, but not in the newer one. Then again, Cinebench 15 shows TR as the faster CPU. Is this benchmark reliable?

    How come Chromium compilation is so slow? Others have pointed out that they get much better scaling (a linear speedup). That makes sense, because compilation basically consists of launching isolated processes (compiler instances). Is this related to the segfaulting problem under GNU/Linux systems?

    For encoding, I would start using FFmpeg when benchmarking this many cores (see the sketch after this comment). I recall FFmpeg being faster than Handbrake for the same number of cores; maybe the GUI loop interrupts the process in a performance-unfriendly way. Too much overhead. HPC workloads can suffer even from a network driver generating too many interrupts (hence the Linux tickless configuration).

    I have read the SYSmark results, and I find it strange that TR's media results are slower than its data results: TR is slower than Intel in media and faster than Intel in data. Isn't SYSmark from BAPCo (http://www.pcworld.com/article/3023373/hardware/am... )? You already point it out in the article, sorry.

    How come the R9 Fury in Shadow of Mordor has AMD and Intel CPUs running consistently at two different frame rates (~95 vs. ~103)?

    The same happens with the GTX 1080. Both cases occur regardless of the Intel architecture (Haswell, Broadwell, and Skylake all show the same FPS value).

    What happens with the NVIDIA driver in Rocket League? A bad caching algorithm (TR has more cores/threads -> more cache available to store GPU command data)? You say you had issues, but what are your thoughts?
    How come GTA V shows those under-60 and under-30 FPS graphs, given that the game is available for the PS4 and Xbox One (it has already been optimized for a two-CCX CPU, or at least a version exists for that case)? Nevertheless, with NVIDIA cards, 2 seconds out of 90 is not that much.

    My guess is that all these benchmarks are programmed using threading libraries from the "good old times", hence the poor scaling. In some cases there is also architecture-specific targeted code, and the small datasets play a part too. I would also not make a case out of a benchmark whose code suffers from false sharing (¡:O!)

    Currently, for gaming, it seems the easiest approach is a virtual machine with PCIe passthrough, pinned to one of the MCM dies.

    As a suggestion to AnandTech, I would like to see more free (libre) software used to measure CPU performance, with the benchmarks compiled from source against the target CPU architecture - something like Phoronix does. Maybe you could use PTS (the Phoronix Test Suite).
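Picking up the FFmpeg suggestion above: a minimal sketch of how one might time a GUI-free x264 encode across thread counts. The input file name, preset, and thread sweep are assumptions for illustration; `-f null -` discards the encoded output so only encode throughput is measured.

```python
# Rough sketch: time a CPU x264 encode with FFmpeg, with no GUI overhead.
# Input file, preset and thread counts are illustrative assumptions.
import subprocess
import time

def time_encode(src, threads):
    """Run one x264 software encode and return its wall-clock time."""
    cmd = [
        "ffmpeg", "-y", "-i", src,
        "-c:v", "libx264", "-preset", "slow",
        "-threads", str(threads),
        "-f", "null", "-",          # discard output; measure encode only
    ]
    start = time.perf_counter()
    subprocess.run(cmd, check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return time.perf_counter() - start

if __name__ == "__main__":
    # Hypothetical 4K source; sweep thread counts to observe scaling.
    for n in (8, 16, 32):
        print(f"{n:2d} threads: {time_encode('sample_4k.mkv', n):.1f} s")
```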
  • Filiprino - Thursday, August 10, 2017 - link

    Positive things: Threadripper stays under its TDP in power consumption, while Intel is more power hungry. The Intel 16-core might go through the roof in power consumption.
    Good gaming performance. Intel is generally better, but TR still offers a beefy CPU for that too, losing only a few frames.
    Strong rendering performance.
    Strong video encoding performance.

    When you talk about IPC, it would be useful to measure it with profiling tools, not just by reporting "points", "milliseconds" and "seconds".
    Seeing how these benchmarks do not scale much beyond 10 cores, you might realize that software has to get better (a quick calculation follows this comment).
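The scaling point can be made concrete with Amdahl's law: even a small serial fraction caps speedup well below the core count. A quick illustrative calculation, where the 5% serial fraction is an assumed figure rather than anything measured in the review:

```python
# Amdahl's law: speedup(n) = 1 / (s + (1 - s) / n), s = serial fraction.
# The 5% serial fraction is an assumed example, not a measured value.

def amdahl_speedup(n_cores, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

for n in (4, 8, 10, 16, 32):
    print(f"{n:2d} cores -> {amdahl_speedup(n, 0.05):4.1f}x")
# With s = 0.05: 10 cores give ~6.9x and 16 cores only ~9.1x, which is
# consistent with benchmarks that stop scaling much beyond 10 cores.
```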
  • Chad - Thursday, August 10, 2017 - link

    Seconding the ffmpeg test (pretty please!)
  • mapesdhs - Thursday, August 10, 2017 - link


    Ian, a query about the CPU Legacy Tests: why do you reckon the 1920X beats both the 1950X and 1950X-G in CB 11.5 MT, yet the latter two win out in CB 10 MT? Is there a max-thread limit in v11.5? Filiprino asked much the same above.

    "...and so losing half the threads in Game Mode might actually be a detriment to a workstation implementation."

    Isn't that the whole point though? For most workstation tasks, don't use Game Mode. There will be exceptions of course, but in general...

    Btw, where's C-ray? ;)

    Ian.
  • Da W - Thursday, August 10, 2017 - link

    ALL OF YOU COMPLAINERS: START A TECH REVIEW WEBSITE YOURSELVES AND STFU!
  • hansmuff - Thursday, August 10, 2017 - link

    Don't read the comments. Also, a lot of the "complaints" are read by Ryan, and he actually addresses them; his articles improve as a result of criticism. He's never been bad, but you can see a steady rise in quality over time, along with his engagement with critical commentary.
    IOW, we don't really need a referee.
  • hansmuff - Thursday, August 10, 2017 - link

    And of course I mean Ian, not Ryan.
  • mapesdhs - Friday, August 11, 2017 - link

    It is great that he replies at all, and does so to quite a lot of the posts too.
  • Kepe - Thursday, August 10, 2017 - link

    Wait a second: according to AMD and all the other articles about the 1950X and Game Mode, Game Mode disables all the physical cores of one of the CPU dies and leaves SMT on, so you get 8 cores and 16 threads. It doesn't just turn off SMT for a 16-core/16-thread setup.

    AMD's info here: https://community.amd.com/community/gaming/blog/20...
