Final Words

As we bring this review to a close, it's clear that AMD has thrown a great deal at us with the Radeon R9 Fury X. After the company's stumble with their last single-GPU flagship, the Radeon R9 290X, they have reevaluated what they want to do, how they want to build their cards, and what kind of performance they want to aim for. As a result the R9 Fury X is an immensely interesting card.

From a technical perspective AMD has done a ton right here. The build quality is excellent, the load acoustic performance is unrivaled, and the gaming performance is great. Meanwhile, although the ultimate value of High Bandwidth Memory to the buyer is only as great as the card's performance, from a hobbyist perspective I am excited for what it means for future cards. The massive bandwidth improvements, the power savings, and the space savings have all done wonderful things for the R9 Fury X, and will do so for other cards in the future as well.

Compared to the R9 290X then, AMD has done virtually everything they needed to do to right what was wrong, and to compete with an NVIDIA energized by GTX Titan and the Maxwell architecture. As, by my own admission, one of the harshest critics of the R9 290X and R9 290 for the 290 series' poor reference acoustic performance, I believe AMD has built an amazing card with the R9 Fury X. I dare say AMD finally "gets it" on card quality and so much more.

Had this card launched against the GTX Titan X a couple of months ago, we would today be talking about how AMD doesn't quite dethrone the NVIDIA flagship, but instead serves as a massive spoiler, delivering most of the GTX Titan X's performance for a fraction of the cost. Unfortunately for AMD, this is not what has happened. The competition for the R9 Fury X is not an overpriced GTX Titan X but a well-priced GTX 980 Ti, which, to add insult to injury, launched first, even though it was in all likelihood NVIDIA's reaction to the R9 Fury X.

The problem for AMD is that the R9 Fury X is only 90% of the way there, and without a price spoiler effect the R9 Fury X doesn't go quite far enough. At 4K it trails the GTX 980 Ti by 4%, which is to say that AMD could not manage even a strict tie, let alone take the lead. To be fair to AMD, a 4% difference in absolute terms is unlikely to matter in the long run, and for most practical purposes the R9 Fury X is a viable alternative to the GTX 980 Ti at 4K. Nonetheless it does technically trail the GTX 980 Ti here, and that's not the only issue that dogs such a capable card.

At 2560x1440 the card loses its status as a viable alternative. AMD's performance deficit is over 10% at this point, and as we've seen in a couple of our games, AMD is hitting some very real CPU bottlenecking even on our high-end testbed. This bottlenecking only surfaces at lower resolutions, thanks to the high framerates those resolutions afford, and it is not a problem for 60Hz monitors. However, AMD is also promoting 2560x1440@144Hz FreeSync monitors, and these CPU bottlenecking issues greatly undercut that use case.
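
To illustrate why the bottleneck shows up at lower resolutions rather than at 4K, here is a minimal sketch of the usual mental model, in which each frame costs roughly the larger of the CPU time and the GPU time; all of the millisecond figures below are assumptions for illustration, not numbers from our testing.

    # Illustrative model: per-frame time ~ max(CPU time, GPU time).
    # Every millisecond figure here is an assumption, not a measurement.
    def fps(cpu_ms, gpu_ms):
        return 1000.0 / max(cpu_ms, gpu_ms)

    cpu_ms = 8.0                           # assumed driver + engine CPU cost per frame
    print(fps(cpu_ms, gpu_ms=16.0))        # ~62 fps: 4K-like load, GPU-bound
    print(fps(cpu_ms, gpu_ms=7.0))         # ~125 fps: 1440p-like load, now CPU-bound
    # In the second case the CPU caps the card near 125 fps, short of a
    # 144Hz target, no matter how much faster the GPU gets.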

The bigger issue, I suppose, is that while the R9 Fury X is very fast, I don't feel we've reached the point where 4K gaming on a single GPU is the best way to go; too often we still need to cut back on image quality to reach playable framerates. 4K is arguably still the domain of multi-GPU setups; meanwhile, cards like the R9 Fury X and GTX 980 Ti are excellent choices for 2560x1440 gaming, or even 1080p gaming for owners who want to take advantage of the image quality improvements from Virtual Super Resolution.

The last issue that dogs AMD here is VRAM capacity. At the end of the day first-generation HBM limits them to 4GB of VRAM, and while they've made a solid effort to work around that limit, there is only so much they can do. 4GB is enough right now, but I am concerned that R9 Fury X owners will run into VRAM capacity issues before the card is due for replacement, even on an accelerated two-year upgrade cycle.

Once you get to a straight-up comparison, the problem AMD faces is that the GTX 980 Ti is the safer bet. On average it performs better at every resolution, it has more VRAM, it consumes a bit less power, and NVIDIA's drivers are lean enough that we aren't seeing the CPU bottlenecking that would impact owners of 144Hz displays. To that end the R9 Fury X is by no means a bad card – in fact it's quite a good card – but NVIDIA struck first and struck with a slightly better card, and this is the situation AMD must face. At the end of the day one could do just fine with the R9 Fury X; it's just not what I believe to be the best card at $649.

With that said, the R9 Fury X does have some advantages that, at least comparing reference cards to reference cards, NVIDIA cannot touch, and these give the R9 Fury X a great niche to reside in. The acoustic performance is absolutely amazing, and while it's not enough to overcome some of the card's other issues overall, if you absolutely must have the lowest load noise possible from a reference card, the R9 Fury X should easily impress you. I doubt that even the forthcoming R9 Nano can match what AMD has done with the R9 Fury X in this respect. Meanwhile, although the radiator does present its own challenges, the smaller size of the card should be a boon to small system builders who need something a bit different than standard 10.5” cards. Throw a couple of these into a Micro-ATX SFF PC, and it will be the PSU, not the video cards, that becomes your biggest concern, as the rough numbers below illustrate.
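
As a sanity check on that PSU point, a quick back-of-envelope sketch; the 275W figure is AMD's rated typical board power for the R9 Fury X, while the CPU/platform draw and the headroom factor are assumptions.

    # Back-of-envelope PSU budget for a two-card small-form-factor build.
    # 275W is AMD's rated typical board power for the R9 Fury X; the
    # CPU/platform draw and the 80% loading target are assumptions.
    fury_x_board_power = 275               # watts per card (AMD's rating)
    cpu_and_platform = 150                 # watts, assumed high-end CPU + board + drives
    load_watts = 2 * fury_x_board_power + cpu_and_platform
    print(load_watts)                      # 700W sustained load
    print(round(load_watts / 0.80))        # ~875W PSU to keep loading near 80%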

Ultimately I believe AMD deserves every bit of credit they get for the R9 Fury X. They have put together a solid card that shows an impressive improvement over what they gave us two years ago with the R9 290X. With that said, as someone who would like to see AMD succeed and prosper, the fact that they get so close only to be outmaneuvered by NVIDIA once again makes the current situation all the more painful; it's one thing to lose to NVIDIA by feet, but losing by inches only reminds you of just how close AMD got to pulling off the upset. At the end of the day I think AMD can at least take home credit for forcing the GTX 980 Ti into existence, which has benefited the wider hobbyist community. Still, looking at AMD's situation I can't help but wonder what happens from here, as it seems like AMD badly needed a win they won't quite get.

Finally, with the launch of the R9 Fury X behind us, it's time to turn our gaze towards the future, the very near future. The R9 Fury X's younger sibling, the R9 Fury, launches in two weeks. Though certainly slower by virtue of its cut-down Fiji GPU, it is also $100 cheaper and uses a more traditional air-cooled design. With NVIDIA still selling the 4GB GTX 980 for $500, the playing field is going to be much different below the R9 Fury X, so I am curious to see just how things shape up on the 14th.

Comments

  • TallestJon96 - Sunday, July 5, 2015 - link

    This card and the 980 Ti meet two interesting milestones in my mind. First, this is the first time 1080p isn't even considered. Pretty cool to be at the point where 1080p is considered a bit of a low resolution for high end cards.

    Second, it's the point where we have single cards that can play games at 4K, with higher graphical settings, and better performance than a PS4. So at this point, if a PS4 is playable, then 4K gaming is playable.

    It's great to see higher and higher resolutions.
  • XtAzY - Sunday, July 5, 2015 - link

    Geez these benchies are making my 580 look ancient.
  • MacGyver85 - Sunday, July 5, 2015 - link

    "Idle power does not start things off especially well for the R9 Fury X, though it’s not too poor either. The 82W at the wall is a distinct increase over NVIDIA’s latest cards, and even the R9 290X. On the other hand the R9 Fury X has to run a CLLC rather than simple fans. Further complicating factors is the fact that the card idles at 300MHz for the core, but the memory doesn’t idle at all. HBM is meant to have rather low power consumption under load versus GDDR5, but one wonders just how that compares at idle."

    I'd like to see you guys post power consumption numbers with power to the pump cut at idle, to answer the questions you pose. I'm pretty sure the card is competitive without the pump running (but still with the fan, to keep the comparison equal). If not, it will give us more of an insight into what improvements AMD can make to HBM power consumption in the future. But I'd be very surprised if they haven't dealt with that during the design phase. After all, power consumption is THE defining limit for graphics performance.
  • Oxford Guy - Sunday, July 5, 2015 - link

    Idle power consumption isn't the defining limit. The article already said that the cooler keeps the temperature low while also keeping noise levels in check. The result of keeping the temperature low is that AMD can more aggressively tune for performance per watt.
  • Oxford Guy - Sunday, July 5, 2015 - link

    This is a gaming card, not a card for casuals who spend most of their time with the GPU idling.
  • Oxford Guy - Sunday, July 5, 2015 - link

    The other point, which wasn't really made in the article, is that while the idle noise is higher, consider how many GPUs exhaust their heat into the case. That means higher case fan noise, which could cancel out the idle noise difference. This card's radiator can be set to exhaust directly out of the case.
  • mdriftmeyer - Sunday, July 5, 2015 - link

    It's an engineering card as much as it is for gaming. It's a great solid modeling card with OpenCL. The way AMD is building its driver foundation will pay off big in the next quarter.
  • Nagorak - Monday, July 6, 2015 - link

    I don't know that I agree with that. Even people who game a lot probably use their computer for other things, and it sucks to be using more watts while idle. That being said, the increase is not a whole lot.
  • Oxford Guy - Thursday, July 9, 2015 - link

    Gaming is a luxury activity. People who are really concerned about power usage would, at the very least, stick with a low-wattage GPU like a 750 Ti or something and turn down the quality settings. Or, if you really want to be green, don't do 3D gaming at all.
  • MacGyver85 - Wednesday, July 15, 2015 - link

    That's not really true. I don't mind my gfx card pulling a lot of power while I'm gaming. But I want it to sip power when it's doing nothing. And since any card spends most of its time idling, idle draw is actually very important (if not most important) in overall (yearly) power consumption.

    Btw I never said that idle power consumption is the defining limit, I said power consumption is the defining limit. It's a given that any Watt you save while idling is generally a Watt of extra headroom when running at full power. The lower the baseline load, the more room for actual, functional (graphics) power consumption. And as it turns out I was right in my assumption that the graphics card's idle power consumption, minus the cooler pump, is competitive with nVidia's.
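
To put the yearly-consumption argument above in concrete terms, here is a back-of-envelope sketch; the idle delta, usage pattern, and electricity rate are all illustrative assumptions rather than measured values.

    # Rough yearly cost of an idle power delta; every figure is an
    # illustrative assumption, not a measured value.
    idle_delta_w = 8                       # assumed extra idle draw vs. a competing card
    idle_hours_per_day = 6                 # assumed time the PC spends idling
    kwh_per_year = idle_delta_w * idle_hours_per_day * 365 / 1000.0
    print(round(kwh_per_year, 1))          # ~17.5 kWh per year
    print(round(kwh_per_year * 0.12, 2))   # ~$2.10/year at an assumed $0.12/kWh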
