Rise of the Tomb Raider

One of the newest games in the gaming benchmark suite is Rise of the Tomb Raider (RoTR), developed by Crystal Dynamics. It is the sequel to the popular Tomb Raider reboot, which was loved for its automated benchmark mode. But don’t let that fool you: the benchmark mode in RoTR is very different this time around.

Visually, the previous Tomb Raider pushed realism to the limits with features such as TressFX, and the new RoTR goes one stage further when it comes to graphics fidelity. This leads to an interesting set of hardware requirements: some sections of the game are typically GPU limited, whereas others with a lot of long-range physics can be CPU limited, depending on how the driver translates the DirectX 12 workload.

Where the old game had one benchmark scene, the new game has three different scenes with different requirements: Geothermal Valley (1-Valley), Prophet’s Tomb (2-Prophet), and Spine of the Mountain (3-Mountain) - and we test all three. These scenes are taken directly from the game, but it has been noted that a scene like 2-Prophet can be one of the most CPU-limited parts of its entire level, and the portion shown in the benchmark is only a small slice of that level. Because of this, we report the results for each scene on each graphics card separately.

Graphics options for RoTR are similar to other games of this type, offering presets or allowing the user to configure texture quality, anisotropic filtering levels, shadow quality, soft shadows, occlusion, depth of field, tessellation, reflections, foliage, bloom, and features like PureHair, which builds on TressFX from the previous game.

Again, we test at 1920x1080 and 4K using our native 4K displays. At 1080p we run the High preset, while at 4K we use the Medium preset which still takes a sizable hit in frame rate.

It is worth noting that RoTR is a little different from our other benchmarks in that it keeps its graphics settings in the registry rather than a standard ini file, and unlike the previous Tomb Raider game the benchmark cannot be called from the command line. Nonetheless we scripted around these issues to run the benchmark four times and parse the results. From the frame time data, we report the averages, 99th percentiles, and our time under analysis.
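To illustrate what that last parsing step involves, below is a minimal Python sketch (purely illustrative, not the scripts used here) of how a single run's frame-time trace could be reduced to the metrics we report; the function name, the example data, and the 60 FPS floor used for the "time under" figure are assumptions for the example.

```python
# A minimal sketch (not the actual benchmark scripts) of reducing a
# frame-time trace to an average frame rate, a 99th percentile frame
# time, and a "time under" figure for an assumed 60 FPS floor.

import numpy as np

def summarize_frametimes(frametimes_ms, fps_floor=60.0):
    """Reduce a list of per-frame times (milliseconds) to summary metrics."""
    ft = np.asarray(frametimes_ms, dtype=float)

    avg_fps = 1000.0 / ft.mean()        # average frame rate over the run
    p99_ms = np.percentile(ft, 99)      # 99th percentile frame time

    # "Time under": share of total run time spent on frames slower than
    # the floor, i.e. frames that took longer than 1000/fps_floor ms.
    threshold_ms = 1000.0 / fps_floor
    time_under_pct = ft[ft > threshold_ms].sum() / ft.sum() * 100.0

    return avg_fps, p99_ms, time_under_pct

if __name__ == "__main__":
    # Made-up frame-time trace for illustration (ms per frame).
    run = [15.2, 16.0, 18.5, 33.1, 14.9, 16.7, 25.4, 15.5]
    avg_fps, p99_ms, under = summarize_frametimes(run)
    print(f"avg {avg_fps:.1f} FPS, 99th pct {p99_ms:.1f} ms, "
          f"{under:.1f}% of time under 60 FPS")
```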

All of our benchmark results can also be found in our benchmark engine, Bench.

ASRock RX 580 Performance

[Chart: Rise of the Tomb Raider (1080p, Ultra)]

Comments (111)

  • mkaibear - Tuesday, June 12, 2018 - link

    "Total flop"

    I suggest benchmarking the CPU in your phone against this CPU and try again.
  • SanX - Tuesday, June 12, 2018 - link

They mostly serve different purposes and apps and have different TDPs. But if you restrict the power consumption of Intel processors to that of mobile processors, then in the same apps it's not clear in advance which one will win.

    Time for ARM to look at the server and supercomputers markets.
  • iranterres - Monday, June 11, 2018 - link

    HAHA. Intel once again trying to fool some people and appeasing the fanboys with something worthless and expensive.
  • xchaotic - Tuesday, June 12, 2018 - link

    Is the regular i7-8600K unable to run all cores at 5GHz? If so, what's the max stable freq for a non-binned i7-8600K? Personally I went for an even lower/cheaper i5-8400 CPU, but I see why some people prefer to be running max speed all the time...
  • Rudde - Tuesday, June 12, 2018 - link

    I assume you mean the i7-8700k.
    There is a phenomenon called 'the silicon lottery.' Basically, when you buy an i7-8700k, you can't know the max stable frequency. It could max out at 5.2GHz or it could only reach 4.7GHz before going unstable. The thing is, you can't know what you'll end up with.
    This brings us to the i7-8086k. The i7-8086k is pretty much guaranteed to have a max stable frequency above 5GHz. Of course, this matters only when overclocking.
  • Bradyb00 - Tuesday, June 12, 2018 - link

    Is it a lower temp than a 8700k for a given multiplier though? i.e. both 8700k and 8086k at 46x which is cooler? 8700k obviously has to be averaged as not everyone is lucky with the silicon lottery.
    Presumption is the 8086k will run cooler on average due to the better binning.

    In which case I'm happy to pay more to save some degrees in my wee itx build
  • Lolimaster - Tuesday, June 12, 2018 - link

    Why not simply pick the Ryzen 5 2600, same thing with actual lower temps from using high quality solder...

    $189
  • TheinsanegamerN - Monday, June 18, 2018 - link

    Depends on the use case. For pure gaming, I'd stick with intel, which is a bit faster now and, if history is any indication, will hold up a LOT better for gaming in 5 years than the AMD chip will.

    Especially if you run games or emulators dependent on IPC (like PCSX2), the intel chip will perform a lot better than the AMD chip.

    There is also the memory controller. Ryzen 2000 improved, but intel's controller is still superior, and that matters for things like RTS games that consume memory bandwidth like black holes consume stars.
  • Stuka87 - Tuesday, June 12, 2018 - link

    Props to Asrock for providing the system so that you could get us stuff so quickly Ian. Not sure why everybody is complaining about the system and cooling that was used. The system was loaned to you so that you could get us numbers fast, which personally I am happy about. Thanks for your hard work Ian!
  • El Sama - Tuesday, June 12, 2018 - link

    This is quite the premium cost for a small increase in frequency that should be close to what you get from an OCed 8700k; an interesting offering regardless.
