Middle Earth: Shadow of Mordor

Our next benchmark is Monolith’s popular open-world action game, Middle Earth: Shadow of Mordor. One of our current-gen console multiplatform titles, Shadow of Mordor is plenty punishing on its own, and at Ultra settings it absolutely devours VRAM, showcasing the knock-on effect that current-gen consoles have on VRAM requirements.

Shadow of Mordor - 3840x2160 - Ultra Quality

Shadow of Mordor - 3840x2160 - Very High Quality

Shadow of Mordor - 2560x1440 - Ultra Quality

Shadow of Mordor ends up being a big win for AMD, with the R9 Fury cards shooting well past the GTX 980. Based on our earlier R9 Fury X review this was not an unexpected result, but at the end of the day with a 20%+ performance advantage, it’s a great situation for AMD to be in.

Meanwhile the R9 Fury’s performance relative to its X-rated sibling is yet again in the 7% range. So far the performance difference between the two cards is surprisingly consistent.

Finally, since AMD’s last two $550 cards were the R9 290X and HD 7970, let’s quickly look at those comparisons. At 1440p the R9 Fury holds only a 17% lead over the R9 290X “Uber”, which is surprisingly small given that the 290X is almost two years old. The R9 Fury has more efficient front-ends and back-ends along with significant advantages in shader throughput and memory bandwidth, and yet the gains over the 290X are fairly modest. On the other hand, 7970 owners looking to upgrade to another Radeon should like what they’re seeing, as the R9 Fury’s 79% performance advantage puts it well into upgrade territory.
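For readers wanting to reproduce these comparisons from their own numbers, the percentage leads quoted above are simple relative-performance ratios. A minimal sketch, using hypothetical frame rates chosen only to illustrate the arithmetic (they are not the review's measured figures):

```python
def relative_advantage(fps_a: float, fps_b: float) -> float:
    """Percentage by which card A outperforms card B."""
    return (fps_a / fps_b - 1) * 100

# Hypothetical example: if the R9 Fury rendered 46.8 fps where an
# older card managed 40.0 fps, that would be a 17% lead.
print(round(relative_advantage(46.8, 40.0)))  # 17

# And 71.6 fps versus 40.0 fps would correspond to a 79% lead.
print(round(relative_advantage(71.6, 40.0)))  # 79
```

Note that the ratio is taken against the older card's result, which is why a card that is "79% faster" is not quite double the performance.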

Shadow of Mordor - Min Frame Rate - 3840x2160 - Ultra Quality

Shadow of Mordor - Min Frame Rate - 3840x2160 - Very High Quality

Shadow of Mordor - Min Frame Rate - 2560x1440 - Ultra Quality

Shifting gears to minimum framerates, the situation at 4K is similarly in AMD’s favor. One consequence of going up against the GTX 980 is that, with its own 4GB of VRAM, it’s just as VRAM-limited as the R9 Fury, so in a VRAM-intensive game like Shadow of Mordor neither card holds an advantage there. It’s quite interesting, however, that once we back off to 1440p, the GTX 980 surges forward.

288 Comments

  • Midwayman - Friday, July 10, 2015 - link

    I'd love to see these two go at it again once dx12 games start showing up.
  • Mugur - Saturday, July 11, 2015 - link

    Bingo... :-). I bet the whole Fury lineup will gain a lot with DX12, especially the X2 part (4 + 4 GB won't equal 4 as in current CF). They are clearly CPU limited at this point.
  • squngy - Saturday, July 11, 2015 - link

    I don't know...

    Getting dx12 performance at the cost of dx11 performance sounds like a stupid idea this soon before dx12 games even come out.

    By the time a good amount of dx12 games come out there will probably be new graphics cards available.
  • thomascheng - Saturday, July 11, 2015 - link

    They will probably circle around and optimize things for 1080p and dx11, once dx12 and 4k is at a good place.
  • akamateau - Tuesday, July 14, 2015 - link

    DX12 games are out now. DX12 does not degrade DX11 performance. In fact Radeon 290x is 33% faster than 980 Ti in DX12. Fury X just CRUSHES ALL nVIDIA silicon with DX12 and there is a reason for it.

    DX11 can ONLY feed data to the GPU serially and sequentially. DX12 can feed data asynchronously; the CPU sends the data down the shader pipeline WHEN it is processed. Only AMD has this IP.
  • @DoUL - Sunday, July 19, 2015 - link

    Kindly provide link to a single DX12 game that is "out now".

    In every single review of the GTX 980 Ti there is this slide of the DX12 feature set that the GTX 980 Ti supports, and in that slide, in all the reviews, "Async Compute" is right there sitting in the open, so I'm not really sure what you mean by "Only AMD has this IP"!

    I'd strongly recommend that you hold your horses till DX12 games start to roll out, and even then, don't forget the rocky start of DX11 titles!

    Regarding the comparison you're referring to, that guy is known for his obsession with mathematical calculations and synthetic benchmarking. Given the differences between real-world applications and numbers derived from mathematical calculations, you shouldn't be taking his numbers as a factual baseline for what's to come.
  • @DoUL - Sunday, July 19, 2015 - link

    My comment was intended as a reply to @akamateau
  • OldSchoolKiller1977 - Sunday, July 26, 2015 - link

    You are an idiotic person; wishful thinking and dreams don't make you correct. As stated, please provide a link to these so-called DX12 games and your wonderful "Fury X just CRUSHES ALL NVidia" statement.
  • Michael Bay - Sunday, July 12, 2015 - link

    As long as there is separate RAM in PCs, the memory argument is moot, as contents are still copied to and operated on in two places.
  • akamateau - Tuesday, July 14, 2015 - link

    Negative. Once graphics data is processed and sent to the shaders, it next goes to VRAM, or video RAM.

    System ram is what the CPU uses to process object draws. Once the objects are in the GPU pipes system ram is irrelevant.

    In fact that is one of AMD's stacked memory patents. AMD will be putting HBM on APUs to act not only as CPU cache but as HBM video RAM as well. They have patents for programmable HBM using FPGAs and for reconfigurable HBM cache memory as well.

    Stacked memory HBM can also sit on the CPU package as a replacement for system RAM. Can you imagine how your system would fly with 8-16GB of HBM instead of system RAM?
