Shadow of Mordor

The next title in our testing is a battle of system performance with the open-world action-adventure title Middle-earth: Shadow of Mordor (SoM for short). Developed by Monolith using the LithTech Jupiter EX engine plus numerous detail add-ons, SoM goes for detail and complexity. The main story was written by the same writer as Red Dead Redemption, and the game received Zero Punctuation's Game of the Year for 2014.

A 2014 game is fairly old to be testing now; however, SoM has a stable codebase and player base, and can still stress a PC down to the ones and zeroes. At the time, SoM was unique in offering a dynamic screen resolution setting, allowing users to render at resolutions higher than native that are then scaled down to the monitor. This form of natural oversampling was designed to let the user experience a truer vision of what the developers intended, assuming you had the graphics hardware to power it but only a sub-4K monitor.
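
The oversampling arithmetic itself is simple. The sketch below is illustrative only; the function name and scale convention are assumptions for this article, not SoM's actual settings API:

```python
# Illustrative sketch of dynamic-resolution supersampling arithmetic.
# The function name and scale convention are assumptions, not SoM's API.

def render_resolution(display_w: int, display_h: int, scale: float) -> tuple[int, int]:
    """Return the internal render resolution for a display size and scale.

    scale > 1.0 renders above native and downsamples to the monitor
    (supersampling); scale < 1.0 renders below native and upscales.
    """
    return round(display_w * scale), round(display_h * scale)

# A 1080p monitor at a 2.0x scale renders each frame at 4K internally:
print(render_resolution(1920, 1080, 2.0))  # (3840, 2160)
```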

The title has an in-game benchmark, which we run with an automated script that applies the graphics settings, selects the benchmark, and parses the frame-time output dumped to the drive. The graphics settings include standard options such as Graphical Quality, Lighting, Mesh, Motion Blur, Shadow Quality, Textures, Vegetation Range, Depth of Field, Transparency, and Tessellation. There are standard presets as well.
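
As an illustration of the parsing step, the sketch below assumes the frame-time dump is plain text with one frame time in milliseconds per line; the benchmark's real output format may differ:

```python
# Hypothetical parser for a frame-time dump. The one-value-per-line,
# milliseconds format is an assumption about the file, not a documented spec.

def parse_frame_times(path: str) -> list[float]:
    """Read a frame-time dump into a list of per-frame times in ms."""
    times: list[float] = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                times.append(float(line))
    return times
```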

We run the benchmark at 1080p and at native 4K on our 4K monitors, using the Ultra preset. Results are averaged across four runs, and we report the average frame rate, the 99th percentile frame rate, and our Time Under analysis.
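
For readers curious how those numbers fall out of a frame-time trace, here is a minimal sketch; the 60 FPS "time under" threshold and the percentile method are assumptions for illustration, and the exact definitions used in our graphs may differ:

```python
# Sketch of per-run metrics from a list of frame times in milliseconds.
# The 60 FPS "time under" threshold and percentile method are assumptions.

def run_metrics(frame_times_ms: list[float], threshold_fps: float = 60.0):
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_s

    # 99th percentile frame rate: the frame time that only the slowest
    # 1% of frames exceed, expressed as an instantaneous frame rate.
    p99_time = sorted(frame_times_ms)[int(0.99 * (len(frame_times_ms) - 1))]
    p99_fps = 1000.0 / p99_time

    # Fraction of the run spent on frames slower than the threshold.
    limit_ms = 1000.0 / threshold_fps
    time_under = sum(t for t in frame_times_ms if t > limit_ms) / 1000.0
    return avg_fps, p99_fps, time_under / total_s
```

Averaging across four runs then just means calling this once per run and taking the mean of each metric.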

All of our benchmark results can also be found in our benchmark engine, Bench.

MSI GTX 1080 Gaming 8G Performance

[1080p and 4K benchmark graphs]

ASUS GTX 1060 Strix 6G Performance

[1080p and 4K benchmark graphs]

Sapphire Nitro R9 Fury 4G Performance

[1080p and 4K benchmark graphs]

Sapphire Nitro RX 480 8G Performance

[1080p and 4K benchmark graphs]

Comments (347)

  • NikosD - Sunday, August 13, 2017 - link

    Well, reading the whole review today - 13/08/2017 - I can see that the reviewer did something even worse than not using DDR4-3200 to give us performance numbers.

    He used DDR4-2400, as he clearly states in the configuration table, filling up the performance tables, BUT on the power consumption page he added DDR4-3200 results (!) just to inform us that DDR4-3200 consumes 13W more, without providing any performance numbers for that memory speed (!!)

    The only thing left for the reviewer is to tell us which department of Intel he works in exactly, because in the first pages he says he wanted to test TR against a 2P Intel system, as Skylake-X has only 10C/20T, but Intel didn't allow him.

    Ask your Intel department to permit it next time.
  • Zingam - Sunday, August 13, 2017 - link

    Yeah! You make a great point! Too much emphasis on gaming all the time! These processors aren't GPUs after all! Most people who buy PCs don't play games at all. Even I as a game developer would like to see more real world tests, especially compilation and data-crunching tests that are typical for game content creation and development workloads. Even I as a game developer spend 99% of my time in front of the computer not playing any games.
  • pm9819 - Friday, August 18, 2017 - link

    So Intel made AMD release the underpowered, overheating Bulldozer CPUs? Did Intel also make them sell their US- and EU-based fabs so they'd be wholly dependent on the Chinese to make their chips? Did Intel also make them buy an equally struggling graphics card company? Truth is, AMD lost all the mind and market share they had because of bad corporate decisions and uncompetitive CPU designs post-Thunderbird. It's no one's fault but their own that it took seven years to produce a competitive replacement. Was Intel supposed to wait till they caught up? And Intel was a monopoly long before AMD started producing competitive CPUs.

    You can keep blaming Intel for AMD's screw-ups, but those of us with common sense and the ability to read know the fault lies with AMD's management.
  • ddriver - Thursday, August 10, 2017 - link

    You are not sampled because of your divine objectivity, Ian; you are sampled because you review for a site that is still somewhat popular from its former glory. You can deny it all you want, and understandably so, as it is part of your job, but AT is heavily biased towards the rich American boys - Intel, Apple, Nvidia... You are definitely subtle enough for the dumdums, but for better or worse, we are not all dumdums yet.

    But hey, it is not all that bad; after all, nowadays there are scores of websites running reviews, so people have a basis for comparison and can extrapolate objective results for themselves.
  • ddriver - Thursday, August 10, 2017 - link

    And some bits of constructive criticism - it would be nicer if those reviews featured more workloads people actually use in practice. Too many synthetics, too many short-running tests, too many tests with software that is like "wtf is it and who in the world is using it".

    For example, rendering - very few people in the industry actually use Corona or Blender; Blender is used a lot for modelling and texturing, but not really for rendering. Neither is Luxmark. Neither is POV-Ray, neither is CB.

    Most people who render stuff nowadays use 3ds Max and V-Ray, so testing this would actually be indicative of actual, practical performance to more people than all those other tests combined.

    Also, people want render times, not scores. Scores are a very poor indication of the actual performance you will get, because many of those tests are short, so the CPU doesn't operate in the same mode it would under continuous work.

    Another rendering test that would benefit prosumers is After Effects. A lot of people use After Effects, all the time.

    You also don't have a DAW test, something like Cubase or Studio One.

    A lot of the target market for HEDT is also interested in multiphysics, for example ANSYS or COMSOL.

    The compilation test you run, as already mentioned several times by different people, is not the most adequate either.

    Basically, this review has very low informational value for people who are actually likely to purchase TR.
  • mapesdhs - Thursday, August 10, 2017 - link

    AE would definitely be a good test for TR; it's something that can hammer an entire system, unlike games, which only stress certain elements. I've seen AE renders grab 40GB of RAM in seconds. A guy at Sony told me some of their renders can gobble 500GB of data just for a single frame, imposing astonishing I/O demands on their SAN and render nodes. Someone at a London movie company told me they use a 10GB/sec SAN to handle this sort of thing, and the issues surrounding memory access vs. cache vs. cores are very important, e.g. their render management software can disable cores where some types of render benefit from a larger slice of memory bandwidth per core.

    There are all sorts of tasks which impose heavy I/O loads while also needing varying degrees of main CPU power. Some gobble enormous amounts of RAM, like ANSYS, though I don't know if that's still used.

    I'd be interested to know how threaded Sparks in Flame/Smoke behave with TR, though I guess that won't happen unless Autodesk/HP sort out the platform support.

    Ian.
  • Zingam - Sunday, August 13, 2017 - link

    Good points!
  • Notmyusualid - Sunday, August 13, 2017 - link

    ...only he WAS sampled. Read the review.
  • bongey - Thursday, August 10, 2017 - link

    You don't have to be paid by Intel, but this is just a bad review.
  • Gothmoth - Thursday, August 10, 2017 - link

    Where there is smoke, there is fire.

    There are clear indications that AnandTech is a bit biased.
