Shadow of Mordor

The next title in our testing is a battle of system performance with the open-world action-adventure game Middle-Earth: Shadow of Mordor (SoM for short). Produced by Monolith using the LithTech Jupiter EX engine with numerous detail add-ons, SoM goes for detail and complexity. The main story was written by the same writer as Red Dead Redemption, and the game received Zero Punctuation's Game of the Year in 2014.

A 2014 game is fairly old to be testing now; however, SoM has a stable code base and player base, and can still stress a PC down to the ones and zeroes. At the time, SoM was unique in offering a dynamic screen resolution setting that lets users render at resolutions higher than native, with the result scaled down to the monitor. This form of natural oversampling was designed to let users experience a truer vision of what the developers wanted, assuming they had the graphics hardware to power it but only a sub-4K monitor.
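As a rough illustration of how such a resolution scale maps to an internal render target, here is a minimal Python sketch; the function name, monitor resolution, and scale values are our own assumptions for the example, not the game's actual implementation.

# Minimal sketch: how a dynamic resolution scale maps to a render resolution.
# scale > 1.0 renders above native and downsamples (oversampling);
# scale < 1.0 renders below native and upscales.
def render_resolution(monitor_w, monitor_h, scale):
    return round(monitor_w * scale), round(monitor_h * scale)

# Example: a 1080p monitor at a 2.0x scale is rendered internally at 4K
# before being filtered back down to 1920x1080.
print(render_resolution(1920, 1080, 2.0))   # (3840, 2160)
print(render_resolution(1920, 1080, 0.75))  # (1440, 810)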

The title has an in-game benchmark, which we run with an automated script that implements the graphics settings, selects the benchmark, and parses the frame-time output that is dumped to the drive. The graphics settings include standard options such as Graphical Quality, Lighting, Mesh, Motion Blur, Shadow Quality, Textures, Vegetation Range, Depth of Field, Transparency, and Tessellation. There are standard presets as well.

We run the benchmark at 1080p and at native 4K on our 4K monitors, using the Ultra preset. Results are averaged across four runs, and we report the average frame rate, 99th percentile frame rate, and "time under" analysis.
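To show how a frame-time dump reduces to those three metrics, here is a hedged Python sketch. The file names, the assumption of one frame time in milliseconds per line, and the 60 FPS "time under" threshold are ours for illustration; the actual script and output format may differ.

# Hedged sketch: reduce a frame-time dump to the reported metrics.
# Assumes one frame time (in milliseconds) per line; file names and the
# 60 FPS "time under" threshold are illustrative, not the actual format.

def load_frame_times(path):
    with open(path) as f:
        return [float(line) for line in f if line.strip()]

def summarize(frame_times_ms, threshold_fps=60.0):
    total_ms = sum(frame_times_ms)
    avg_fps = 1000.0 * len(frame_times_ms) / total_ms

    # 99th percentile frame rate: the instantaneous FPS that 99% of frames exceed.
    ordered = sorted(frame_times_ms)
    p99_ms = ordered[min(len(ordered) - 1, int(0.99 * len(ordered)))]
    p99_fps = 1000.0 / p99_ms

    # Time under: fraction of the run spent below the chosen FPS threshold.
    limit_ms = 1000.0 / threshold_fps
    time_under = sum(t for t in frame_times_ms if t > limit_ms) / total_ms

    return {"avg_fps": avg_fps, "p99_fps": p99_fps, "time_under": time_under}

# Average the metrics over the four benchmark runs.
runs = [summarize(load_frame_times("som_run%d.txt" % i)) for i in range(1, 5)]
for key in runs[0]:
    print(key, sum(r[key] for r in runs) / len(runs))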

All of our benchmark results can also be found in our benchmark engine, Bench.

MSI GTX 1080 Gaming 8G Performance (1080p and 4K results)

ASUS GTX 1060 Strix 6G Performance (1080p and 4K results)

Sapphire Nitro R9 Fury 4G Performance (1080p and 4K results)

Sapphire Nitro RX 480 8G Performance (1080p and 4K results)

347 Comments

  • mapesdhs - Friday, August 11, 2017 - link

    And consoles are on the verge of moving to many-core main CPUs. The inevitable dev change will spill over into PC gaming.
  • RoboJ1M - Friday, August 11, 2017 - link

    On the verge?
    All major consoles have had a greater core count than consumer CPUs, not to mention complex memory architectures, since, what, 2005?
    One suspects the PC market has been benefiting from this for quite some time.
  • RoboJ1M - Friday, August 11, 2017 - link

    Specifically, the 360 had 3 general-purpose CPU cores.
    And the PS3 had one general-purpose CPU core and 7 short-pipeline coprocessors that could only read and write to their caches. They had to be fed by the CPU core.
    The 360 had unified program and graphics RAM (still not common on PC!)
    As well as its large high-speed cache.
    The PS3 had separate program and video RAM.
    The Xbox One and PS4 were super boring PCs in boxes. But they did have 8-core CPUs. The X1X is interesting. It's got unified RAM that runs at ludicrous speed. Sadly it will only be used for running games at 1800p to 2160p at 30 to 60 FPS :(
  • mlambert890 - Saturday, August 12, 2017 - link

    Why do people constantly assume this is purely time/market economics?

    Not everything can *be* parallelized. Do people really not get that? It isn't just developers targeting a market. There are tasks that *can't be parallelized* because of the practical reality of dependencies. Executing ahead and out of order can only go so far before you have an inverse effect. Everyone could have 40-core CPUs... it doesn't mean that *gaming workloads* will be able to scale out that well.

    The work that lends itself best to parallelization is the rendering pipeline, and that's already entirely on the GPU (which is itself massively parallel).
  • Magichands8 - Thursday, August 10, 2017 - link

    I think what AMD did here though is fantastic. In my mind, creating a switch to change modes vastly adds to the value of the chip. I can now maximize performance based upon workload and software profile and that brings me closer to having the best of both worlds from one CPU.
  • Notmyusualid - Sunday, August 13, 2017 - link

    @ rtho782

    I agree it is a mess, and also, it is not AMD's fault.

    I've had a 14c/28t Broadwell chip for over a year now, and I cannot launch Tomb Raider with HT on, nor GTA5. But most s/w is indifferent to the number of cores presented to it, it would seem to me.
  • BrokenCrayons - Thursday, August 10, 2017 - link

    Great review but the word "traditional" is used heavily. Given the short lifespan of computer parts and the nature of consumer electronics, I'd suggest that there isn't enough time or emotional attachment to establish a tradition of any sort. Motherboard sockets and market segments, for instance, might be better described in other ways, unless it's becoming traditional in the review business to call older product designs traditional. :)
  • mkozakewich - Monday, August 14, 2017 - link

    Oh man, but we'll still gnash our teeth at our broken tech traditions!
  • lefty2 - Thursday, August 10, 2017 - link

    It's pretty useless measuring power alone. You need to measure efficiency (performance/watt).
    So yeah, a 16-core CPU draws more power than a 10-core, but it's also probably doing a lot more work.
  • Diji1 - Thursday, August 10, 2017 - link

    Er, why don't you just do it yourself? They've already given you the numbers.
