The AMD Radeon R9 Fury X Review: Aiming For the Top
by Ryan Smith on July 2, 2015 11:15 AM EST

Middle Earth: Shadow of Mordor
Our next benchmark is Monolith’s popular open-world action game, Middle Earth: Shadow of Mordor. One of our current-gen console multiplatform titles, Shadow of Mordor is plenty punishing on its own, and at Ultra settings it absolutely devours VRAM, showcasing the knock-on effect that current-gen consoles have on VRAM requirements.
With Shadow of Mordor things finally start looking up for AMD, as the R9 Fury X scores its first win. Okay, it’s more of a tie than a win, but it’s farther than the R9 Fury X has made it so far.
At 4K with Ultra settings the R9 Fury X manages an average of 48.3fps, a virtual tie with the GTX 980 Ti and its 47.9fps. Dropping down to Very High quality does see AMD pull back just a bit, but with a difference between the two cards of just 0.7fps, it’s hardly worth worrying about. Even 2560x1440 looks good for AMD here, trailing the GTX 980 Ti by just over 1fps, at an average framerate of over 80fps. Overall the R9 Fury X delivers 98% to 101% of the performance of the GTX 980 Ti, more or less tying the direct competitor to AMD’s latest card.
Meanwhile compared to the R9 290X, the R9 Fury X doesn’t see quite the same gains. Performance is a fairly consistent 26-28% ahead of the R9 290X, less than what we’ve seen elsewhere. Earlier we discussed how the R9 Fury X’s performance gains will depend on which part of the GPU is getting stressed the most; tasks that stress the shaders show the most gains, and tasks that stress geometry or the ROPs potentially show the lowest gains. In the case of SoM, I believe we’re seeing at least a partial case of being geometry/ROP influenced.
Unfortunately for AMD, the minimum framerate situation isn’t quite as good as the averages. These framerates aren’t bad – the R9 Fury X is always over 30fps – but even accounting for the higher variability of minimum framerates, they’re trailing the GTX 980 Ti by 13-15% with Ultra quality settings. Interestingly, at 4K with Very High quality settings the minimum framerate gap is just 3%, in which case what we are most likely seeing at Ultra is the impact of running those settings with only 4GB of VRAM. The card doesn’t get punished too badly for it, but the R9 Fury X and its 4GB of HBM are beginning to crack under the pressure of what is admittedly one of our more VRAM-demanding games.
458 Comments
bennyg - Saturday, July 4, 2015 - link
Marketing performance. Exactly. Except efficiency was not good enough across the generations of 28nm GCN, in an era where efficiency and thermal/power limits constrain performance; look at what Nvidia did over a similar span, from Fermi (which was on the market when GCN 1.0 was released) to Kepler to Maxwell. Plus efficiency is kind of the ultimate marketing buzzword in all areas of tech, and not being able to tout it (on top of having generally inferior products) hamstrung their marketing all along.
xenol - Monday, July 6, 2015 - link
Efficiency is important because of three things:

1. If your TDP is through the roof, you'll have issues with your cooling setup. Any time you introduce a bigger cooling setup because your cards run that hot, you're going to be mocked for it and people are going to be wary of it. With 22nm or 20nm nowhere in sight for GPUs, efficiency had to be a priority; otherwise you're going to ship cards that take up three slots or ship with water coolers.
2. You also can't just play to the desktop market. Laptops are still the preferred computing platform, and even when people do go for a desktop, AIOs are looking much more appealing than a monitor/tower combo. So if you want any shot at either market, you have to build an efficient chip. And you have to convince people they "need" this chip, because Intel's iGPUs do what most people want just fine anyway.
3. Businesses and others with "always on" computers would like it if their machines ate less power. Even if you only save a handful of watts per machine, multiply that by thousands and it adds up to an appreciable amount of savings.
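The back-of-the-envelope math behind that last point is easy to sketch. A minimal, hypothetical estimate (the wattage, fleet size, and electricity price below are made-up assumptions for illustration, not figures from the comment):

```python
# Hypothetical fleet-wide power-savings estimate; all inputs are assumptions.
def annual_savings_usd(watts_saved_per_pc, num_pcs,
                       hours_per_year=8760, usd_per_kwh=0.12):
    """Energy saved (kWh/year) across the fleet, times the electricity price."""
    kwh_saved = watts_saved_per_pc * num_pcs * hours_per_year / 1000.0
    return kwh_saved * usd_per_kwh

# e.g. 20 W saved per machine across 5,000 always-on PCs
print(round(annual_savings_usd(20, 5000)))  # → 105120
```

Even a modest per-machine saving becomes a six-figure annual number at this (assumed) fleet size, which is the commenter's point.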
xenol - Monday, July 6, 2015 - link
(Also, by "computing platform" I mean the platform people choose when they want a computer.)

medi03 - Sunday, July 5, 2015 - link
ATI is the reason both Microsoft and Sony use AMD's APUs to power their consoles. It might be the reason why APUs even exist.
tipoo - Thursday, July 2, 2015 - link
That was then, this is now. Now AMD, even with the acquisition, has a lower market cap than Nvidia.

Murloc - Thursday, July 2, 2015 - link
yeah, no.

ddriver - Thursday, July 2, 2015 - link
ATI wasn't bigger; AMD just paid a preposterous and entirely unrealistic amount of money for it. Soon after the merger, AMD + ATI was worth less than what AMD had paid for the latter, ultimately leading to the loss of its foundries and putting it in an even worse position. Let's face it: AMD was, and historically always has been, betrayed. Its sole purpose is to create the illusion of competition so the big boys don't look bad for running unopposed, even though that's what happens in practice. Just when AMD got lucky with the Athlon, a mole was sent in to make sure AMD stays down.
testbug00 - Sunday, July 5, 2015 - link
The foundries didn't go because AMD bought ATI, though that might have accelerated it by a few years. AMD's foundry issues and their cost date back to the 1990s and 2000-2001.
5150Joker - Thursday, July 2, 2015 - link
True, AMD was in a much better position than NVIDIA in 2006; they just got owned.

3DVagabond - Friday, July 3, 2015 - link
When was Intel the underdog? Because that's who knocked them down. (They aren't out yet.)