Meet The Radeon R9 Fury X

Right off the bat it is safe to say that we have not seen a high-end card quite like the R9 Fury X before. We have seen small cards before, we have seen liquid cooled cards before, and we have seen well-crafted cards before. But none of those attributes have come together before in a single card like they have with the R9 Fury X. AMD has spared no expense on their new single-GPU flagship and it shows.

If we had to describe the R9 Fury X in a nutshell, the only thing that would do it any justice is a very small card with a very big radiator. Liquid cooling is not by any means new (see: R9 295X2), but thanks to the space savings of HBM there hasn’t been a card before that has paired up a small board with a large radiator in this fashion. It is, arguably, a card designed for an entirely different paradigm than the dual-slot air cooled cards we have come to know over the last decade.

But first, let’s talk about build quality and naming. To cut right to the chase, NVIDIA’s initial GTX Titan project saw a large and unexpected amount of success in 2013. Named in honor of the Titan supercomputer (NVIDIA’s first major supercomputer win for their Tesla business), the GTX Titan was created to serve as a new high-end card line for the company. The GTX Titan created for NVIDIA what amounted to a luxury category for video cards, priced at an unprecedented $1000, while at the same time introducing a level of build quality and performance for a blower-type air cooler that is unmatched to this day. By all reports the GTX Titan handily exceeded NVIDIA’s expectations, and though expensive, it established the viability of a higher-quality video card.

With the R9 Fury X, AMD is looking to establish the same kind of luxury brand and image for their products. Similar to the Titan, AMD has dropped the number in the product name (though formally we still consider it part of the 300 series), opting instead to brand the product Fury. AMD has not been very clear on the rationale for the Fury name, and while they do not have a high-profile supercomputer to draw a name from, they do have other sources. Depending on what you want to believe, the name is either a throwback to AMD’s pre-Radeon (late 1990s) video card lineup, the ATI Rage Fury family, or a dig at NVIDIA’s Titan branding: in Greek mythology the Furies were deities of vengeance who had an interesting relationship with the Greek Titans, one that is completely unsuitable for publication in an all-ages technical magazine.

In any case, what isn’t in doubt is the quality of the R9 Fury X. AMD has opted to build a card to a similar luxury standard as the GTX Titan, with a focus on both the card and somewhat disembodied radiator. For R9 Fury X AMD has gone metal, and the card proper is composed of a multi-piece die-cast aluminum body that is built around the card’s PCB. While you won’t find any polycarbonate windows here (the pump isn’t exactly eye-catching) what you will find is soft-touch rubber along the front face of the card and its edges. Meanwhile on the back side of the card there is another soft-touch panel to serve as a backplate. Since the card lacks a fan of any kind, there are no cooling concerns here; with or without a backplate, putting R9 Fury X cards side-by-side doesn’t impact cooling in any meaningful manner.

Ultimately at this point AMD and NVIDIA are basically taking cues from the cell phone industry on design, and the end result is a card that looks good, and yes, even feels good in-hand. The R9 Fury X has a very distinctive silhouette that should be easily noticeable in any open or windowed case, and should that fail, you would be hard-pressed to miss the lighting. The card features multiple LED-lit elements; the first is the Radeon logo, which lights up red when the card is powered on, while directly next to the PCIe power connectors is a bank of LEDs. These are designed to indicate the load on the card, with 8 LEDs for load and a final, 9th LED that indicates whether the card has gone into AMD’s ultra-deep sleep ZeroCore Power mode. The 8 load LEDs can even be configured for red or blue operation to color-coordinate with the rest of a case; the only color you can’t select is green, in theory because green is reserved for the sleep LED, though in practice I think it’s safe to point out that green is not AMD’s color…

Moving on, let’s talk about the R9 Fury X’s cooling apparatus, its closed loop liquid cooling system and its rather large radiator. One of AMD’s few success stories in 2014 was the R9 295X2, the company’s dual GPU Hawaii card. After numerous struggles with air cooling for both single and dual GPU cards – R9 290X, Radeon HD 6990, Radeon HD 7990 – and facing a 500W TDP for their Hawaii card, AMD ditched straight air cooling in favor of a closed loop liquid cooler. Though the cooler AMD used was pushed to its limit by the high TDP of the R9 295X2, at the end of the day it did its job and did it well, effectively cooling the card while delivering acoustic performance simply unheard of for a reference dual GPU card.

For the R9 Fury X AMD needs to dissipate over 275W of heat. AMD’s reference cooler for the R9 290X was simply not very good, and even NVIDIA would be hard pressed to dissipate that much heat, as their Titan cooler is optimized around 250W (and things quickly get worse when pushed too far past that). As a result AMD opted to once again go the closed loop liquid cooling (CLLC) route for the R9 Fury X, taking what they learned from the R9 295X2 and improving upon it.

The end result is a bigger, better CLLC that is the only source of cooling for the card. The R9 295X2 used a fan for the VRMs and VRAM, but for the R9 Fury X AMD has slaved everything to the CLLC. The CLLC in turn is even bigger than before; it’s still based around a 120mm radiator, but the combination of radiator and fan is now 60mm thick, honestly about the thickest radiator we would expect to fit in our closed-case testbed. As a result the radiator has an immense 500W rated cooling capacity, far outstripping the card’s 275W TBP, and without a doubt making it overkill for the R9 Fury X.

Cracking open the R9 Fury X’s housing to check out the pump block shows us that it is built by Cooler Master. Furthermore, in order to have the CLLC cool the GPU and supporting discrete components, there is an interesting metal pipe in the loop, which serves as a liquid cooled heatpipe of sorts to draw heat away from the MOSFETs used in the card’s VRM setup. Otherwise the GPU and the HBM modules situated next to it are covered by the pump block itself.

AMD’s official rating for the R9 Fury X’s cooling apparatus is that it should keep the card at 50C while playing games. In practice what we have found is that in our closed case test bed the GPU temperature gets up to 65C, at which point the CLLC fan ramps up very slightly (about another 100RPM) and reaches equilibrium. The low operating temperature of the Fiji GPU is not only a feather in AMD’s cap, but is an important part of the design of the card, as the low temperature keeps power consumption down and improves AMD’s overall energy efficiency.

Meanwhile from an operational standpoint, unlike the R9 295X2, where the CLLC acted independently based on the temperature of the liquid, the R9 Fury X’s CLLC is slaved to the card’s fan controls. As a result it’s possible for the card (and users) to directly control the fan speed based on GPU/VRM temperatures. The independent CLLC was not a direct problem for the R9 295X2, but with the CLLC of the R9 Fury X now responsible for the VRMs as well, letting the card control the fan is a must.
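Functionally, slaving the CLLC to the card’s fan controls amounts to a simple feedback loop over the hottest on-card sensor. The sketch below is purely illustrative; the function name, thresholds, and RPM figures are our own hypothetical stand-ins loosely mirroring the equilibrium behavior we observed, and do not reflect AMD’s actual firmware logic:

```python
# Conceptual sketch of slaved fan control, where fan speed follows the
# hotter of the GPU and VRM temperature sensors. All thresholds and RPM
# values here are hypothetical stand-ins, not AMD's actual fan curve.
def fan_rpm(gpu_temp_c: float, vrm_temp_c: float) -> int:
    t = max(gpu_temp_c, vrm_temp_c)  # react to the hotter component
    base_rpm = 1000                  # hypothetical idle fan speed
    target_c = 65                    # hypothetical ramp threshold
    if t <= target_c:
        return base_rpm
    # ramp gently past the threshold: roughly 100 RPM per additional 5C
    return base_rpm + int((t - target_c) * 20)
```

The point of the VRM input is the one made above: once the CLLC is responsible for the VRMs as well, the fan curve has to consider more than just GPU temperature.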

Overall the acoustic performance of the R9 Fury X is unprecedented for a high-end card, as we’ll see in our benchmark section. Unfortunately the CLLC does have one drawback, and that is idle noise. Just as with the R9 295X2, there’s no such thing as a free lunch when it comes to moving around coolant, and as a result the pump makes more noise at idle than what you’d find on an air cooled card, be it blower or open air.

Moving on, let’s get back to the card itself. The R9 Fury X ships with an official Typical Board Power (TBP) of 275W, which is intended to represent the amount of power it will consume during the average gaming session. That said, the power delivery circuitry of the card is designed to deliver quite a bit more energy than that, as the card features a pair of 8-pin PCIe power sockets and has an official power delivery rating of 375W. And although AMD doesn’t specify a board limit in watts, based on empirical testing we were able to get up to an estimated 330W. As a result the card has plenty of power and thermal headroom if desired.
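For reference, that 375W delivery rating falls straight out of the PCI Express electromechanical specifications: up to 75W from the x16 slot plus up to 150W per 8-pin connector. A quick sketch of the arithmetic (the helper function is our own, for illustration only):

```python
# Rated power delivery for a PCIe card, per the PCI Express CEM spec:
# the x16 slot supplies up to 75W, each 6-pin connector up to 75W,
# and each 8-pin connector up to 150W.
PCIE_SLOT_W = 75
SIX_PIN_W = 75
EIGHT_PIN_W = 150

def board_power_limit(num_8pin: int, num_6pin: int = 0) -> int:
    """Total rated power delivery for a given connector configuration."""
    return PCIE_SLOT_W + num_8pin * EIGHT_PIN_W + num_6pin * SIX_PIN_W

print(board_power_limit(2))  # R9 Fury X: two 8-pin connectors -> 375
```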

Meanwhile the card’s VRM setup is a 6-phase design. AMD tells us that these VRMs can handle up to 400A, no doubt helped by the liquid cooling taking place. AMD’s out of the box overclocking support is limited – no voltage and no HBM clockspeed controls, only the power limit and GPU clockspeed – but it’s clear that this card was built to take some significant overclocking. To that end there is a dual BIOS switch present that can switch between a programmable BIOS and a fixed reference BIOS, and I can only imagine that AMD is expecting hardcore overclockers to look into BIOS modification to gain the necessary power and voltage controls for more extreme overclocking.

Taking a look at the naked PCB, it’s remarkable how small it is. The PCB measures just 7.5” in length, 3” shorter than the standard 10.5” found on most high-end reference cards. This smaller size is primarily enabled through the use of HBM, which brings the card’s VRAM on-package and uses relatively tiny stacks of memory as opposed to the large cluster of 16 GDDR5 chips required for R9 290X. HBM also reduces the number of discrete components required for the power delivery system, as HBM has much simpler power delivery requirements. The end result is that a good deal of the board’s functionality now resides on the Fiji package itself, with limited amounts of supporting circuitry forming the rest of the board.


Top: R9 Fury X. Bottom: R9 390X/290X

To show this off, AMD let us take pictures of a bare R9 Fury X PCB next to an R9 290X PCB, and the difference is simply staggering. As a result the R9 Fury X is going to enjoy an interesting niche as a compact, high-performance card. The large radiator does invite certain challenges, but I expect OEM system builders are going to have a fun time designing some Micro-ATX sized SFF PCs around this card.

Switching gears, the Display I/O situation is an interesting one. Without a fan on the card itself, there is no need for vents on the I/O bracket, and as a result we get a solid metal bracket with holes punched out for the card’s 4 display I/O ports. Here we find 3x DisplayPort 1.2a ports and a single HDMI 1.4a port. What you will not find is a DL-DVI port; after going down to 1 port on the Radeon HD 7970 and back up to 2 ports on the Radeon R9 290X, AMD has eliminated the port entirely. Physically, I believe this comes down to the fact that the size of the card combined with the pump doesn’t leave room for a DVI port on the second row, though I will also note that AMD announced a few years ago that they were targeting a 2015 date to begin removing DVI ports.

What this means is that the R9 Fury X can natively only drive newer DisplayPort and HDMI equipped monitors. While DVI-only monitors are rare in 2015 (essentially everything has an HDMI port), owners of DVI-only monitors will be in an interesting situation. With DisplayPort AMD has plenty of flexibility, as it can essentially be converted to anything with the right adapter; owners of DVI-only monitors will need either a cheap passive DP-to-SL-DVI adapter for single link DVI monitors, or a more expensive DP-to-DL-DVI adapter for dual link DVI monitors. Cutting off DVI users was always going to be hard and the R9 Fury X doesn’t make it any easier, but on the other hand there’s no getting around the fact that the DVI connector is large, outdated, and space-inefficient in 2015.

Wrapping things up, as far as build quality goes the R9 Fury X is without a doubt the best designed reference Radeon card to date. AMD has learned quite a bit from the R9 290X, and while there is a balance of factors in play here, there is no question that AMD has addressed the build quality issues of their past reference cards in a big way. We have moved on from the late 2000s and early 2010s, and the days of cards like the GTX 280, GTX 480, Radeon HD 6990, and Radeon R9 290X should be behind us. PC video cards, even high-end cards, do not and should not need to be that noisy ever again.

With that said, I do have some general concerns about the fact that the only cards to ship with a high-clocked, fully-enabled Fiji GPU will be liquid cooled. Until now air cooling has always been the baseline and liquid cooling the niche alternative, and while I strongly favor quieter cards, there is nonetheless the question of what it means when AMD tells us that the R9 Fury X will only be available with liquid cooling. With the limits of air cooling having put a lid on GPU power consumption, will AMD’s switch to a CLLC usher in a new power war, where everyone once again ramps up power consumption thanks to the greater cooling capacity of a CLLC?

In the meantime AMD considers the CLLC to be an advantage for them, not just for technical reasons but for marketing reasons. The bill of materials cost on the CLLC is quite high – likely around $80 even in volume for AMD – so don’t be surprised to see if AMD includes that on their cost calculus when promoting the card. They spent the money on a more effective cooler, and they want buyers to know that this is a cost baked in to the base price of the card.

That said, it’s important here to note that this is nothing NVIDIA couldn’t replicate if they really wanted to. Their partners already sell CLLC cards as premium cards, which means AMD has to tread carefully here as NVIDIA could easily go CLLC at a lower price and erase AMD’s advantage, though not without taking a hit to their margins (something I suspect AMD would be just fine with).

Finally, as we’ve already hashed out the implications of 4GB of VRAM a few pages back when addressing the 4GB question, we won’t repeat ourselves at length here. But in summary, while 4GB of VRAM is enough for now, it is only just. The R9 Fury X is likely to face VRAM pressure in under 2 years.


458 Comments


  • testbug00 - Sunday, July 5, 2015 - link

    You don't need architecture improvements to use DX12/Vulkan/etc. The APIs merely allow you to implement them over DX11 if you choose to. You can write a DX12 game without optimizing for any GPUs (although, not doing so for GCN given consoles are GCN would be a tad silly).

    If developers are aiming to put low level stuff in whenever they can, then the issue becomes that, due to AMD's "GCN everywhere" approach, developers may just start coding for PS4, porting that code to Xbox DX12, and then porting that to PC with higher textures/better shadows/effects. In which case Nvidia could take massive performance deficits to AMD due to not getting the same amount of extra performance from DX12.

    Don't see that happening in the next 5 years. At least, not with most games that are console+PC and need huge performance. You may see it in a lot of Indie/small studio cross platform games however.
  • RG1975 - Thursday, July 2, 2015 - link

    AMD is getting there but, they still have a little bit to go to bring us a new "9700 Pro". That card devastated all Nvidia cards back then. That's what I'm waiting for to come from AMD before I switch back.
  • Thatguy97 - Thursday, July 2, 2015 - link

    would you say amd is now the "geforce fx 5800"
  • piroroadkill - Thursday, July 2, 2015 - link

    Everyone who bought a Geforce FX card should feel bad, because the AMD offerings were massively better. But now AMD is close to NVIDIA, it's still time to rag on AMD, huh?

    That said, of course if I had $650 to spend, you bet your ass I'd buy a 980 Ti.
  • Thatguy97 - Thursday, July 2, 2015 - link

    oh believe me i remember they felt bad lol but im not ragging on amd but nvidia stole their thunder with the 980 ti
  • KateH - Thursday, July 2, 2015 - link

    C'mon, Fury isn't even close to the Geforce FX level of fail. It's really hard to overstate how bad the FX5800 was, compared to the Radeon 9700 and even the Geforce 4600Ti.

    The Fury X wins some 4K benchmarks, the 980Ti wins some. The 980Ti uses a bit less power but the Fury X is cooler and quieter.

    Geforce FX level of fail would be if the Fury X was released 3 months from now to go up against the 980Ti with 390X levels of performance and an air cooler.
  • Thatguy97 - Thursday, July 2, 2015 - link

    To be fair the 5950 ultra was actually decent
  • Morawka - Thursday, July 2, 2015 - link

    you're understating nvidia's scores.. they won 90% of all benchmarks, not just "some". a full 120W more power under furmark load and they are using HBM!!
  • looncraz - Thursday, July 2, 2015 - link

    Furmark power load means nothing, it is just a good way to stress test and see how much power the GPU is capable of pulling in a worst-case scenario and how it behaves in that scenario.

    While gaming, the difference is miniscule and no one will care one bit.

    Also, they didn't win 90% of the benchmarks at 4K, though they certainly did at 1440. However, the real world isn't that simple. A 10% performance difference in GPUs may as well be zero difference, there are pretty much no game features which only require a 10% higher performance GPU to use... or even 15%.

    As for the value argument, I'd say they are about even. The Fury X will run cooler and quieter, take up less space, and will undoubtedly improve to parity or beyond the 980Ti in performance with driver updates. For a number of reasons, the Fury X should actually age better, as well. But that really only matters for people who keep their cards for three years or more (which most people usually do). The 980Ti has a RAM capacity advantage and an excellent - and known - overclocking capacity and currently performs unnoticeably better.

    I'd also expect two Fury X cards to outperform two 980Ti cards with XFire currently having better scaling than SLI.
  • chizow - Thursday, July 2, 2015 - link

    The differences in minimums aren't miniscule at all, and you also seem to be discounting the fact 980Ti overclocks much better than Fury X. Sure XDMA CF scales better when it works, but AMD has shown time and again, they're completely unreliable for timely CF fixes for popular games to the point CF is clearly a negative for them right now.
