Middle-Earth: Shadow of Mordor

The final title in our testing is another battle of system performance with the open-world action-adventure title, Middle-Earth: Shadow of Mordor (SoM). Produced by Monolith using the LithTech Jupiter EX engine and numerous detail add-ons, SoM aims for detail and complexity, despite having been cut down from the original plans. The main story was written by the same writer as Red Dead Redemption, and the game received Zero Punctuation's Game of the Year award for 2014.

For testing purposes, SoM provides a dynamic screen resolution setting, allowing us to render at resolutions higher than the monitor's native output and then scale the result back down. This lets us run several tests with the in-game benchmark, recording both average and minimum frame rates. Minimum frame rate results can be found in Bench.
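
The scaling itself is simple to picture. Below is a minimal Python sketch of the idea, assuming a percentage-style quality setting; this is our own illustration, not the game's actual code or option names:

```python
# Illustrative sketch (not SoM's actual implementation): a resolution
# quality setting maps the display resolution to a larger internal render
# target, which the game then downsamples back to the monitor.

def render_resolution(display_w: int, display_h: int, quality_pct: float) -> tuple[int, int]:
    """Scale each axis by quality_pct; 200% on a 1080p panel renders at 4K."""
    scale = quality_pct / 100.0
    return round(display_w * scale), round(display_h * scale)

print(render_resolution(1920, 1080, 200))  # -> (3840, 2160)
```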

For this test we used the following settings with our graphics cards:

Shadow of Mordor Settings

             GPU                          Resolution   Quality
Low GPU      Integrated Graphics          1280x720     Low
             ASUS R7 240 2GB DDR3
Medium GPU   MSI GTX 770 Lightning 2GB    1920x1080    Ultra
             MSI R9 285 Gaming 2G
High GPU     ASUS GTX 980 Strix 4GB       1920x1080    Ultra
             MSI R9 290X Gaming 4G        3840x2160    Ultra

Integrated Graphics

Shadow of Mordor on Integrated Graphics

As with the other IGP tests, the APU solution gets significantly better results.

Discrete Graphics

Shadow of Mordor on ASUS R7 240 DDR3 2GB ($70)

Shadow of Mordor on MSI GTX 770 Lightning 2GB ($245)

Shadow of Mordor on MSI R9 290X Gaming LE 4GB ($380)

Shadow of Mordor on ASUS GTX 980 Strix 4GB ($560)

SoM is the most CPU-agnostic benchmark in our set: as GPU power and resolution increase, the CPU matters less and less to performance. This is why, at 4K Ultra with both the AMD and NVIDIA discrete GPUs, the $70 CPU from AMD comes within 2-3% of the fastest CPUs on average frame rates.

However, it should be noted that CPU power matters more when (a) an AMD discrete GPU is being used, or (b) lower resolutions are in play. In both cases, the AMD FX CPUs are less likely to match Intel's Core i3 parts, which sit at the top of the pack.
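
Both observations fall out of a simple bottleneck model. The sketch below is our own illustration with made-up per-frame costs, not AnandTech's data: when the GPU cost per frame dominates (as at 4K Ultra), CPU differences all but vanish, while at lower resolutions the CPU can become the limiting factor:

```python
# Toy model (our assumption, not AnandTech's methodology): treat each frame
# as costing roughly max(CPU time, GPU time). Once the GPU cost dominates,
# a slower CPU barely moves the average frame rate.

def avg_fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000.0 / max(cpu_ms, gpu_ms)

FAST_CPU_MS, SLOW_CPU_MS = 8.0, 14.0   # hypothetical per-frame CPU costs
for label, gpu_ms in [("1080p Ultra", 12.0), ("4K Ultra", 33.0)]:
    fast = avg_fps(FAST_CPU_MS, gpu_ms)
    slow = avg_fps(SLOW_CPU_MS, gpu_ms)
    print(f"{label}: fast CPU {fast:.1f} fps, slow CPU {slow:.1f} fps")
# 1080p: 83.3 vs 71.4 fps (CPU-bound); 4K: 30.3 vs 30.3 fps (GPU-bound)
```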

Comments

  • tipoo - Monday, August 8, 2016 - link

    Looks like even a Skylake i3 may be able to retire the venerable 2400/2500K, with higher frame rates and better frame times at that. However, a native quad core does prevent larger dips.
  • Kevin G - Monday, August 8, 2016 - link

    I have a feeling much of that is due to the higher base clock on the Skylake i3 vs. the i5-2500K. Skylake's IPC improvements also help boost performance here.

    The real challenge is whether the i3-6320 can best the i5-2500K at the same 3.9 GHz clock speed. Sandy Bridge was a good overclocker, so hitting that figure shouldn't be difficult at all.
  • tipoo - Monday, August 8, 2016 - link

    That's true, overclocked the difference would diminish. But you also get modernities like high-clocked DDR4 in the switchover.

    At any rate, it's funny that a dual-core i3 can now fluidly run just about everything; its two cores are probably faster than the eight in the current consoles.
  • Lolimaster - Monday, August 8, 2016 - link

    Benchmarks don't tell you about the hiccups when playing on a dual core. Especially with things like Crysis 3, or even worse Rise of the Tomb Raider, where you get like half the fps just by using a dual core vs. a cheapo Athlon 860K.
  • gamerk2 - Monday, August 8, 2016 - link

    That's why frame times are also measured, which catches those hitches (see the frame-time sketch after the comments).
  • Samus - Tuesday, August 9, 2016 - link

    I had a lot of issues with my Sandy Bridge i3-2125 in Battlefield 3 circa 2011 with lag and poor minimum frame rates.

    After long discussions on the forums, it was determined that disabling hyper-threading actually improved frame rate consistency. So at least on the Sandy Bridge architecture, and probably dating back to Nehalem or even Prescott, hyper-threading (Jackson Technology, or whatever you want to call it) has a habit of stalling the pipeline if there are too many cache misses to complete the instruction. Obviously more cache alleviates this, so the issue isn't as prominent on the i7s, and it would certainly explain why the 4MB i3s are more consistent performers than the 3MB variety.

    Of course, the only way to prove whether hyper-threading is causing performance inconsistency is to disable it (a rough stand-in via CPU affinity is sketched after the comments). It'd be a damn unique investigation for AnandTech to examine how IPC improvements have affected hyper-threading performance over the years, perhaps even dating back to the P4.
  • AndrewJacksonZA - Wednesday, August 10, 2016 - link

    HOW ON EARTH DID I MISS THIS?!?!

    Thank you for introducing me to Intel's tech known as "Jackson!" This is now *SO* on my "To Buy" list!

    Thank you Samus! :-D
  • bug77 - Monday, August 8, 2016 - link

    Neah, I went i5-2500K -> i5-6600K and there's no noticeable difference. The best part of the upgrade was the new I/O ports on the new motherboard, but it's a sad day when you upgrade after 4 years and the most you have to show is your new M.2 or USB 3.1 ports (and USB 3.1 is only added through a 3rd-party chip).
    Sure, if I bench it, the new i5 is faster, but since the old i5 wasn't exactly slow, I can't say that I see a significant improvement.

    Now, if you mean that instead of getting an i5-2500k one can now look at a Skylake i3, I'm not going to argue with you there. Though (money permitting) the boost speed might be nice to have anyway.
  • Cellar Door - Monday, August 8, 2016 - link

    This is a poorly educated comment:

    a) Your perceived speed might be limited by your storage
    b) You don't fully utilize your CPU's multitasking abilities (all cores)
  • Duckeenie - Monday, August 8, 2016 - link

    Why did you continue to post your comment if you believed you were making poorly educated points?
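
To make gamerk2's point concrete, here is a minimal frame-time sketch with made-up numbers (our illustration, not AnandTech's tooling): two runs with near-identical average frame rates, where only the tail percentile of the frame times betrays the hitching.

```python
# Sketch of frame-time analysis over hypothetical data: two runs with the
# same ~60 fps average, but one hitches badly on every 20th frame.

def percentile(sorted_ms: list[float], pct: float) -> float:
    idx = min(int(len(sorted_ms) * pct / 100.0), len(sorted_ms) - 1)
    return sorted_ms[idx]

smooth = [16.7] * 1000                                     # steady 60 fps
hitchy = [14.0 if i % 20 else 70.0 for i in range(1000)]   # dip every 20th frame

for name, frames in (("smooth", smooth), ("hitchy", hitchy)):
    avg_fps = 1000.0 / (sum(frames) / len(frames))
    p99 = percentile(sorted(frames), 99)
    print(f"{name}: {avg_fps:.1f} fps average, 99th-percentile frame time {p99:.1f} ms")
# smooth: 59.9 fps avg, 16.7 ms p99 -- hitchy: 59.5 fps avg, 70.0 ms p99
```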

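And for Samus's proposed hyper-threading experiment, short of toggling it in the BIOS, a rough approximation is to pin the game to one logical CPU per physical core so the sibling hardware threads stay idle. A hedged sketch using the third-party psutil package; the core numbering is an assumption (on many Linux systems logical CPUs 0..N-1 sit on distinct physical cores, but verify against /proc/cpuinfo before trusting it):

```python
# Rough stand-in for disabling hyper-threading via CPU affinity -- an
# assumption-laden sketch, not a substitute for the BIOS toggle.
# Requires the third-party psutil package (pip install psutil);
# cpu_affinity() works on Linux/Windows, not macOS.
import psutil

def pin_to_physical_cores(pid: int) -> None:
    n_physical = psutil.cpu_count(logical=False)   # e.g. 2 on a Core i3
    # Assumption: logical CPUs 0..n_physical-1 map to distinct physical
    # cores (common on Linux; check /proc/cpuinfo to be sure).
    psutil.Process(pid).cpu_affinity(list(range(n_physical)))

# Demo: pin this script itself, then show the resulting affinity mask.
pin_to_physical_cores(psutil.Process().pid)
print(psutil.Process().cpu_affinity())
```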