Middle Earth: Shadow of Mordor

Our next benchmark is Monolith’s popular open-world action game, Middle Earth: Shadow of Mordor. As one of our current-gen console multiplatform titles, Shadow of Mordor is plenty punishing on its own, and at Ultra settings it absolutely devours VRAM, showcasing the knock-on effect that current-gen consoles have on VRAM requirements.

Shadow of Mordor - 3840x2160 - Ultra Quality

Shadow of Mordor - 3840x2160 - Very High Quality

Shadow of Mordor - 2560x1440 - Ultra Quality

Once again even the GTX Titan X won’t be enough for 60fps at 4K, but at 48.9fps it’s closer to 60fps than 30fps, representing a significant improvement in 4K performance in only a generation. Compared to the GTX 980 and NVIDIA’s other cards, the GTX Titan X is once more in a comfortable lead, beating its smaller sibling by around 33% and the older GK110 cards by 45-60%.

Turning down the game’s quality settings to Very High does improve performance a bit, but at 54.1fps it’s still not quite enough for 60fps. The biggest advantage of Very High quality is alleviating some of the game’s high VRAM requirements, something the GTX Titan cards don’t suffer from in the first place. Otherwise, dropping to 1440p will give us a significant bump in performance, pushing framerates over 80fps once again.

Shadow of Mordor - Min Frame Rate - 3840x2160 - Ultra Quality

Shadow of Mordor - Min Frame Rate - 3840x2160 - Very High Quality

Shadow of Mordor - Min Frame Rate - 2560x1440 - Ultra Quality

Meanwhile the game’s minimum framerate further elaborates on the performance hit from the game’s high VRAM usage at Ultra quality. 3GB cards collapse here, leaving the 4GB cards and the 6GB original Titan much higher in our charts. Multi-GPU performance also struggles here, even with 4GB cards, reminding us that while multi-GPU setups can be potent, they do introduce performance consistency issues that single-GPU cards can avoid.


  • Kevin G - Tuesday, March 17, 2015 - link

    Last I checked, rectal limits are a bit north of 700 mm^2. However, nVidia is already in the crazy realm in terms of economics when it comes to supply/demand/yields/cost. Getting fully functional chips with die sizes over 600 mm^2 isn't easy. Then again, it isn't easy putting down $999 USD for a graphics card.

    However, harvested parts should be quite plentiful and the retail price of such a card should be appropriately lower.
  • Michael Bay - Wednesday, March 18, 2015 - link

    >rectal limits are a bit north of 700 mm^2

    Oh wow.
  • Kevin G - Wednesday, March 18, 2015 - link

    @Michael Bay

    Intel's limit is supposed to be between 750 and 800 mm^2. They released a 699 mm^2 product commercially (the Tukwila Itanium 2) a few years ago, so it can be done.
  • Michael Bay - Wednesday, March 18, 2015 - link

    >rectal limits
  • D. Lister - Wednesday, March 18, 2015 - link

    lol
  • chizow - Tuesday, March 17, 2015 - link

    Yes, it's clear Nvidia had to make sacrifices somewhere to maintain advancements on 28nm, and it looks like FP64/DP got the cut. I'm fine with it though; at least on GeForce products I don't want to pay a penny more for non-gaming features. If someone wants dedicated compute, go Tesla/Quadro.
  • Yojimbo - Tuesday, March 17, 2015 - link

    Kepler also has dedicated FP64 cores, and from what I see in AnandTech articles, those cores are not used for FP32 calculations. So how does NVIDIA save power with Maxwell by leaving FP64 cores off the die? The Maxwell GPUs still seem to be FP64 capable in proportion to the number of FP64 cores placed on the die. What they save by having fewer FP64 cores is die space and, as a result, the ability to have more FP32 cores. In other words, I haven't seen any information about Maxwell that leads me to believe they couldn't have added more FP64 cores when designing GM200 to make a GPU with superior double precision performance and inferior single precision performance compared with the configuration they actually chose. Maybe they simply judged single precision performance to be more important to focus on than double precision, with a performance boost for double precision users having to wait until Pascal is released. Perhaps it was a choice between a modest performance boost for both single and double precision or a significant boost for single precision alone by forgoing double precision. Maybe they thought the efficiency gain of Maxwell could not carry sales on its own.
  • testbug00 - Tuesday, March 17, 2015 - link

    If this is a 250W card using about the same power as the 290x under gaming load, what does that make the 290x?
  • Creig - Tuesday, March 17, 2015 - link

    A very good value for the money.
  • shing3232 - Tuesday, March 17, 2015 - link

    Agree.
