Meet The Sapphire Tri-X R9 Fury OC

Today we’ll be looking at Fury cards from both Sapphire and Asus. We’ll kick things off with Sapphire’s card, the Tri-X R9 Fury OC.

Radeon R9 Fury Launch Cards

|                    | ASUS STRIX R9 Fury     | Sapphire Tri-X R9 Fury | Sapphire Tri-X R9 Fury OC |
|--------------------|------------------------|------------------------|---------------------------|
| Boost Clock        | 1000MHz / 1020MHz (OC) | 1000MHz                | 1040MHz                   |
| Memory Clock       | 1Gbps HBM              | 1Gbps HBM              | 1Gbps HBM                 |
| VRAM               | 4GB                    | 4GB                    | 4GB                       |
| Maximum ASIC Power | 216W                   | 300W                   | 300W                      |
| Length             | 12"                    | 12"                    | 12"                       |
| Width              | Double Slot            | Double Slot            | Double Slot               |
| Cooler Type        | Open Air               | Open Air               | Open Air                  |
| Launch Date        | 07/14/15               | 07/14/15               | 07/14/15                  |
| Price              | $579                   | $549                   | $569                      |

Sapphire is producing this card in two variants: a reference-clocked version and a factory overclocked version. The card Sapphire sampled us is the factory overclocked version, though other than some basic binning to identify chips that can handle the higher clocks, the two cards are physically identical.

As far as Sapphire’s overclock goes, it’s a mild one, with the card shipping at 1040MHz for the GPU while the memory remains unchanged at 1Gbps. As we discussed in our R9 Fury X review, Fiji cards so far don’t have much in the way of overclocking headroom, so AMD’s partners have to take it easy on factory overclocks. Sapphire’s overclock puts the upper bound of any performance increase at 4% – with real-world gains being smaller still – so this factory overclock is on the edge of relevance.
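For reference, that 4% upper bound is just the ratio of the two boost clocks. A quick back-of-the-envelope check (a minimal sketch; the variable names are ours):

```python
# Boost clocks from the launch card table above
reference_clock_mhz = 1000   # stock R9 Fury
oc_clock_mhz = 1040          # Sapphire Tri-X R9 Fury OC

# Upper bound on any performance gain, assuming perfect scaling with
# GPU clock speed; real-world gains will be smaller, since memory
# bandwidth is unchanged at 1Gbps.
uplift = oc_clock_mhz / reference_clock_mhz - 1
print(f"Theoretical maximum uplift: {uplift:.1%}")  # -> 4.0%
```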

Getting down to the nuts and bolts then, Sapphire’s card is a semi-custom design, meaning Sapphire has paired an AMD reference PCB with a custom cooler. The PCB in question is AMD’s PCB from the R9 Fury X, so there’s little new to report here. The PCB itself measures 7.5” long and features AMD’s 6 phase power design, which is built to handle well over 300W. For overclockers there are still no voltage control options available for this board design, though as Sapphire has retained AMD’s dual BIOS functionality there’s plenty of opportunity for BIOS modding.

The real story here is Sapphire’s Tri-X cooler, which gets the unenviable job of replacing AMD’s closed loop liquid cooler from the R9 Fury X. With a TBP of 275W, Sapphire needs to dissipate quite a bit of heat to keep up with Fiji, which has led the company to use one of their Tri-X coolers. We’ve looked at a few different Tri-X cards over the years, and they have been consistently impressive products. For the Tri-X R9 Fury, Sapphire is aiming for much the same.

Overall the Tri-X cooler used on the Tri-X R9 Fury ends up being quite large. Measuring a full 12” long, it runs the length of the PCB and then some, and with that much copper and aluminum it’s not a light card either. The end result is that the card is better described as a PCB mounted on a cooler than a cooler mounted on a PCB, an amusing inversion of the usual video card design. Accordingly, Sapphire has gone the extra mile to ensure that the PCB can support the cooler: there are screws in every last mounting hole, a full-sized backplate further reinforces the card, and the final 4.5” of the cooler that isn’t mounted to the PCB has its own frame to keep it secure as well.

Moving to the top of the card, the Tri-X R9 Fury features three of Sapphire’s 90mm “Aerofoil” fans, the company’s larger, dual ball bearing fans. These fans are capable of moving quite a bit of air even at relatively low speeds, and as a result overall card noise is kept rather low even under load, as we’ll see in full detail in our benchmark section.

Meanwhile Sapphire has also implemented their version of zero fan speed idle on the Tri-X R9 Fury, dubbed Intelligent Fan Control, which allows the card to turn off its fans entirely when their cooling capacity isn’t needed. With such a large heatsink, the Fiji GPU and supporting electronics don’t require active cooling at idle, allowing Sapphire to rely on passive cooling and making the card outright silent at idle. A number of manufacturers have picked up on this feature in the last couple of years, and the silent idling it allows is nothing short of amazing. For Sapphire’s implementation on the Tri-X R9 Fury, we find that the fans don’t power up until around 53C, and power back down once the temperature falls below 44C.

Sapphire Tri-X R9 Fury Zero Fan Idle Points

|          | GPU Temperature | Fan Speed |
|----------|-----------------|-----------|
| Turn On  | 53C             | 27%       |
| Turn Off | 44C             | 23%       |
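To make that behavior concrete, here is a minimal sketch of the hysteresis logic those trip points imply. The thresholds come from the table above; the function and names are our own illustration, not Sapphire’s actual firmware:

```python
FAN_ON_TEMP_C = 53    # fans power up at roughly this GPU temperature
FAN_OFF_TEMP_C = 44   # fans power back down below this temperature

def update_fan_state(gpu_temp_c: float, fans_running: bool) -> bool:
    """Zero fan speed idle as a simple hysteresis loop. The two
    thresholds are deliberately far apart so the fans don't rapidly
    cycle on and off around a single trip point."""
    if not fans_running and gpu_temp_c >= FAN_ON_TEMP_C:
        return True    # heatsink is saturating; spin up (to ~27%)
    if fans_running and gpu_temp_c < FAN_OFF_TEMP_C:
        return False   # passive cooling suffices again (from ~23%)
    return fans_running  # otherwise hold the current state
```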

Helping the cooling effectiveness of the Tri-X quite a bit is the length of the fan and heatsink assembly relative to the PCB. With 4.5” of overhang, the farthest fan sits entirely beyond the PCB. That means the air it pushes through the heatsink doesn’t get redirected parallel to the card – as is normally the case for open air cards – but instead passes straight through the heatsink and out the other side. For a typical tower case this means hot air goes straight up towards the case’s exhaust fans, more efficiently directing that air out of the case and preventing it from being recirculated by the card’s fans. While this doesn’t make a night & day difference in cooling performance, it’s a neat improvement that sidesteps the less-than-ideal airflow situation created by the ATX form factor.

Moving on, let’s take a look at the heatsink itself. The Tri-X’s heatsink runs virtually the entire length of the card, and is subdivided into multiple segments. Connecting these segments are 7 heatpipes, ranging in diameter from 6mm to 10mm. The heatpipes in turn run through both a smaller copper baseplate that covers the VRM MOSFETs and a larger copper baseplate that covers the Fiji GPU itself. Owners looking to modify the card or otherwise remove the heatsink will want to take note here: we’re told that it’s rather difficult to properly reattach the heatsink due to the need to perfectly line it up and mate it with the GPU and the HBM stacks.

The Tri-X R9 Fury’s load temperatures tend to top out at 75C, the temperature limit Sapphire has programmed into the card. As with the R9 Fury X and the reference Radeon 290 series before it, Sapphire is utilizing AMD’s temperature and fan speed target capabilities, so while the card will slowly ramp up the fan as it approaches 75C, once it hits that temperature it will ramp the fan up far more aggressively to keep the temperature at or below 75C.
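In rough pseudocode form, the temperature-target behavior works out to something like the sketch below. The 75C target is Sapphire’s; the ramp rates and thresholds are placeholders of ours purely for illustration:

```python
TEMP_TARGET_C = 75.0  # Sapphire's programmed temperature limit

def fan_step(gpu_temp_c: float, fan_speed_pct: float) -> float:
    """One control step: ramp gently while approaching the target,
    then aggressively once it's reached, holding the GPU at or
    below 75C. Rates here are illustrative, not AMD's tuning."""
    if gpu_temp_c >= TEMP_TARGET_C:
        step = 2.0 * (gpu_temp_c - TEMP_TARGET_C + 1)  # hard ramp at the limit
    elif gpu_temp_c > TEMP_TARGET_C - 15:
        step = 0.3                                     # slow ramp on approach
    else:
        step = -0.5                                    # cool enough; back off
    return min(100.0, max(0.0, fan_speed_pct + step))
```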

Moving on, since Sapphire is using AMD’s PCB, the Tri-X also inherits the reference board’s BIOS and lighting features. The dual-BIOS switch is present, and Sapphire ships the card with two different BIOSes. The default BIOS (switch right) uses the standard 300W ASIC power limit and 75C temperature target, while the second BIOS (switch left) increases the power and temperature limits to 350W and 80C respectively, for greater overclocking headroom. Note however that this doesn’t change the voltage curve, so Fury cards in general will still be held back by a lack of headroom at stock voltages. As for the PCB’s LEDs, Sapphire has retained those as well, though they default to blue (sapphire, naturally) rather than AMD red.
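Functionally the two switch positions boil down to two sets of limits. Expressed as a simple profile table (the structure is our own; the values are Sapphire’s):

```python
# The two BIOSes differ only in their power and temperature limits;
# clocks and the voltage curve are identical on both.
BIOS_PROFILES = {
    "default (switch right)":  {"asic_power_limit_w": 300, "temp_target_c": 75},
    "overclock (switch left)": {"asic_power_limit_w": 350, "temp_target_c": 80},
}
```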

Finally, since this is the AMD PCB, display I/O remains unchanged. This means the Tri-X offers 3x DisplayPorts along with a single HDMI 1.4 port.

Wrapping things up, the OC version we are reviewing today will retail for $569, $20 over AMD’s MSRP. The reference-clocked version, on the other hand, will retail at AMD’s $549 MSRP, making it the only launch card to sell at that price. Finally, Sapphire tells us that the OC version will be the rarer of the two due to its smaller production run, with the majority of Tri-X R9 Fury cards on sale being the reference-clocked version.

Comments

  • nightbringer57 - Friday, July 10, 2015 - link

    Intel kept it in stock for a while but it didn't sell. So management decided to get rid of it, gave it away to a few partners (Dell, HP; many OEMs used BTX for quite a while, both because it was a good user lock-down solution and because the inconveniences of BTX didn't matter in OEM computers, while the advantages were still there) and no one ever heard of it on the retail market again?
  • nightbringer57 - Friday, July 10, 2015 - link

    Damn those non-editable comments...
    I forgot to add: with the switch from the Netburst/Prescott architecture to Conroe (and its successors), CPU cooling became much less of a hassle for mainstream models, so Intel did not have anything left to gain from the effort put into BTX.
  • xenol - Friday, July 10, 2015 - link

    It survived in OEMs. I remember cracking open Dell computers in the latter half of the 2000s and finding out they were BTX.
  • yuhong - Friday, July 10, 2015 - link

    I wonder if a BTX2 standard that fixes the problems of the original BTX would be a good idea.
  • onewingedangel - Friday, July 10, 2015 - link

    With the introduction of HBM, perhaps it's time to move to socketed GPUs.

    It seems ridiculous for the industry standard spec to devote so much space to the comparatively low-power CPU whilst the high-power GPU has to fit within the confines of (multiple) PCIe expansion slots.

    Is it not time to move beyond the confines of ATX?
  • DanNeely - Friday, July 10, 2015 - link

    Even with the smaller PCB footprint allowed by HBM, filling up the area currently taken by expansion cards would only give you room for a single GPU plus support components on an mATX-sized board (most of the space between the PCIe slots and the edge of the mobo is used for other stuff that would need to be kept, not replaced with GPU bits), and the tower cooler on top of it would be a major obstruction for any non-GPU PCIe cards you might want to put into the system.
  • soccerballtux - Friday, July 10, 2015 - link

    man, the convenience of the socketed GPU is great, but just think of how much power we could have if it had its own dedicated card!
  • meacupla - Friday, July 10, 2015 - link

    The clever design trend, or at least what I think is clever, is where the GPU and CPU heatsinks are connected together, so that instead of many smaller heatsinks cooling one chip each, you have one giant heatsink doing all the work, which can result in less space (as opposed to volume) being occupied by the heatsink.

    You can see this sort of design on high end gaming laptops, Mac Pro, and custom water cooling builds. The only catch is, they're all expensive. Laptops and Mac Pro are, pretty much, completely proprietary, while custom water cooling requires time and effort.

    If all ATX mobos and GPUs had their core and heatsink mounting holes in the exact same spot, it would be much easier to design a 'universal multi-core heatsink' that you could just attach to everything that needs it.
  • Peichen - Saturday, July 11, 2015 - link

    That's quite a good idea. With heatpipes, distance doesn't really matter, so if there were a CPU heatsink that could extend 4x 8mm/10mm heatpipes over the video card to cool the GPU, it would be far quieter than the 3x 90mm fan coolers on video cards now.
  • FlushedBubblyJock - Wednesday, July 15, 2015 - link

    330 watts transferred to the low-lying motherboard, with PINS attached to AMD's core failure next...
    Slap that monster heat onto the motherboard, then you can have a giant green plastic enclosure like Dell towers to try to move that heat outside the case... oh, plus a whole 'nother giant VRM setup on the motherboard... yeah, they sure will be doing that soon... just lay down that extra 50 bucks on every motherboard with some 6X VRMs just in case an AMD fanboy decides he wants to buy the megawatt AMD rebranded chip...

    Yep, NOT HAPPENING!
