Middle Earth: Shadow of Mordor

Our next benchmark is Monolith’s popular open-world action game, Middle Earth: Shadow of Mordor. One of our current-gen console multiplatform titles, Shadow of Mordor is plenty punishing on its own, and at Ultra settings it absolutely devours VRAM, showcasing the knock-on effect that current-gen consoles have on VRAM requirements.
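
To put that VRAM pressure in perspective, here is a rough back-of-envelope sketch of render-target memory at 4K. The buffer counts and formats below are illustrative assumptions for the sake of the example, not figures profiled from the game:

```python
# Rough render-target math for a 3840x2160 frame. The buffer counts and
# formats are illustrative assumptions, not measurements from Shadow of Mordor.
WIDTH, HEIGHT = 3840, 2160
BYTES_RGBA8 = 4      # bytes per pixel for an RGBA8 render target
BYTES_DEPTH = 4      # 32-bit depth/stencil

def mib(nbytes: int) -> float:
    return nbytes / (1024 ** 2)

back_buffer = WIDTH * HEIGHT * BYTES_RGBA8        # ~31.6 MiB
g_buffer    = 4 * WIDTH * HEIGHT * BYTES_RGBA8    # assume 4 G-buffer targets
depth       = WIDTH * HEIGHT * BYTES_DEPTH

print(f"back buffer: {mib(back_buffer):6.1f} MiB")
print(f"G-buffer   : {mib(g_buffer):6.1f} MiB")
print(f"depth      : {mib(depth):6.1f} MiB")
# Render targets alone only reach the low hundreds of MiB; it is the
# Ultra-quality texture pool that pushes totals past 4GB at this resolution.
```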

Shadow of Mordor - 3840x2160 - Ultra Quality

Shadow of Mordor - 3840x2160 - Very High Quality

Shadow of Mordor - 2560x1440 - Ultra Quality

Shadow of Mordor ends up being a big win for AMD, with the R9 Fury cards shooting well past the GTX 980. Based on our earlier R9 Fury X review this was not an unexpected result, but with a 20%+ performance advantage it's a great situation for AMD to be in.

Meanwhile the R9 Fury’s performance relative to its X-rated sibling is yet again in the 7% range. So far the performance difference between the two cards is surprisingly consistent.

Finally, since AMD's last two $550 cards were the R9 290X and HD 7970, let's quickly take a look at those comparisons. At 1440p the R9 Fury only has a 17% lead over the R9 290X "Uber", which against a card almost two years old is a surprisingly small lead. The R9 Fury has a more efficient front-end and back-end along with significant advantages in shader throughput and memory bandwidth, and yet the performance gains over the 290X are fairly small. On the other hand, 7970 owners looking to upgrade to another Radeon should like what they're seeing, as the R9 Fury's 79% performance advantage is approaching upgrade territory.
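
For reference, the leads quoted above are simple frame-rate ratios. A minimal sketch of the math, using hypothetical frame rates chosen only to reproduce the quoted percentages (these are not our actual benchmark results):

```python
# lead = (new_fps / old_fps - 1) * 100. The frame rates below are hypothetical
# placeholders to demonstrate the math, not measured benchmark data.
def lead_pct(new_fps: float, old_fps: float) -> float:
    return (new_fps / old_fps - 1.0) * 100.0

fury, r290x_uber, hd7970 = 58.5, 50.0, 32.7   # hypothetical 1440p averages
print(f"R9 Fury vs. 290X Uber: {lead_pct(fury, r290x_uber):.0f}%")  # ~17%
print(f"R9 Fury vs. HD 7970  : {lead_pct(fury, hd7970):.0f}%")      # ~79%
```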

Shadow of Mordor - Min Frame Rate - 3840x2160 - Ultra Quality

Shadow of Mordor - Min Frame Rate - 3840x2160 - Very High Quality

Shadow of Mordor - Min Frame Rate - 2560x1440 - Ultra Quality

Shifting gears to minimum framerates, the situation at 4K is similarly in AMD's favor. One consequence of going up against the GTX 980 is that, with 4GB of VRAM apiece, it is just as VRAM-limited as the R9 Fury, so in a VRAM-intensive game like Shadow of Mordor neither card has an advantage there. However it's quite interesting that once we back off to 1440p, the GTX 980 surges forward.

Comments

  • FlushedBubblyJock - Wednesday, July 15, 2015

    Oh, gee, forgot, it's not amd's fault ... it was "developers and access" which is not amd's fault, either... of course...

    OMFG
  • redraider89 - Monday, July 20, 2015

    What's your excuse for being such an idiotic, despicable and ugly intel/nvidia fanboy? I don't know, maybe your parents? Somewhere you went wrong.
  • OldSchoolKiller1977 - Sunday, July 26, 2015

    I am sorry, so it's NVIDIA fan boys who resort to name calling? What was it that you said, and I quote: "Hypocrite" :)
  • redraider89 - Monday, July 20, 2015

    Your problem is deeper than just that you like intel/nvidia: you apparently hate people who don't like them, and ONLY because they like something different than you do.
  • ant6n - Saturday, July 11, 2015

    A third way to look at it is that maybe AMD did it right.

    Let's say the chip is built from 80% stream processors (by area), the most redundant elements. If some of those functional elements fail during manufacture, they can disable them and sell it as the cheaper card. If something in the other 20% of the chip fails, the whole chip may be garbage. So basically you want a card such that if all the stream processors are functional, the other 20% become the bottleneck, whereas if some of the stream processors fail and they have to sell it as a simple Fury, then the stream processors become the bottleneck.
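
The binning argument in the comment above is easy to sanity-check with a toy Monte Carlo yield model. A minimal sketch, using the comment's 80/20 area split and a made-up defect rate (none of these numbers are Fiji-specific data):

```python
import math
import random

# Toy yield model for the binning argument. MEAN_DEFECTS is an arbitrary
# assumption; SP_AREA is the 80% redundant-area figure from the comment.
random.seed(0)

TRIALS = 100_000
MEAN_DEFECTS = 0.5     # assumed average manufacturing defects per die
SP_AREA = 0.80         # fraction of die area in redundant stream processors

def poisson(lam: float) -> int:
    """Knuth's Poisson sampler: count uniforms until their product < e^-lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

full = salvage = scrap = 0
for _ in range(TRIALS):
    d = poisson(MEAN_DEFECTS)
    if d == 0:
        full += 1        # flawless die -> sell as the full part (Fury X)
    elif all(random.random() < SP_AREA for _ in range(d)):
        salvage += 1     # every defect landed in redundant SPs -> cut-down Fury
    else:
        scrap += 1       # a defect hit the non-redundant 20% of the die

# With these assumptions, roughly 60% of dies come out flawless, ~30% are
# salvageable as the cut-down card, and only ~10% are scrapped.
print(f"full (Fury X)  : {full / TRIALS:.1%}")
print(f"salvage (Fury) : {salvage / TRIALS:.1%}")
print(f"scrap          : {scrap / TRIALS:.1%}")
```
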
  • thomascheng - Saturday, July 11, 2015

    That is probably AMD's smart play. Fury was always the intended card. The perfect chips become the X, and perhaps the less perfect ones will be the Nano.
  • FlushedBubblyJock - Thursday, July 16, 2015

    "fury was always the intended card"
    ROFL
    amd fanboy out much ?
    I mean it is unbelievable, what you said, and that you said it.
  • theduckofdeath - Friday, July 24, 2015

    Just shut up, Bubby.
  • akamateau - Tuesday, July 14, 2015

    Anand was running DX12 benchmarks last spring. When they compared the Radeon 290X to the GTX 980 Ti, nVidia ordered them to stop. That is why no more DX12 benchmarks have been run.

    Intel and nVidia are at a huge disadvantage with DX12 and Mantle.

    The reason:

    AMD IP: Asynchronous Shader Pipelines and Asynchronous Compute Engines.
  • FlushedBubblyJock - Wednesday, July 15, 2015 - link

    We saw Mantle benchmarks, so your fantasy is a bad amd fanboy delusion.
