Meet The ASUS STRIX R9 Fury

Our second card of the day is ASUS’s STRIX R9 Fury, which arrived just in time for the article cutoff. Unlike Sapphire, ASUS is releasing just a single card, the STRIX-R9FURY-DC3-4G-GAMING.

Radeon R9 Fury Launch Cards
|  | ASUS STRIX R9 Fury | Sapphire Tri-X R9 Fury | Sapphire Tri-X R9 Fury OC |
|:---|:---|:---|:---|
| Boost Clock | 1000MHz / 1020MHz (OC) | 1000MHz | 1040MHz |
| Memory Clock | 1Gbps HBM | 1Gbps HBM | 1Gbps HBM |
| VRAM | 4GB | 4GB | 4GB |
| Maximum ASIC Power | 216W | 300W | 300W |
| Length | 12" | 12" | 12" |
| Width | Double Slot | Double Slot | Double Slot |
| Cooler Type | Open Air | Open Air | Open Air |
| Launch Date | 07/14/15 | 07/14/15 | 07/14/15 |
| Price | $579 | $549 | $569 |

With only a single card, ASUS has decided to split the difference between reference and OC cards and offer one card that covers both. Out of the box the STRIX is a reference-clocked card, with a GPU clockspeed of 1000MHz and a memory rate of 1Gbps. However ASUS also officially supports an OC mode, which when enabled through their GPU Tweak II software bumps the clockspeed up 20MHz to 1020MHz. The gesture is appreciated, but with such a small overclock the performance gains are sub-2% and ultimately trivial. Otherwise at stock the card should see performance similar to Sapphire’s reference-clocked R9 Fury card.
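For reference, the OC mode uplift works out as simple arithmetic on the clocks quoted above (a quick sketch, nothing ASUS-specific):

```python
# OC mode raises the boost clock from 1000MHz to 1020MHz.
stock_mhz, oc_mhz = 1000, 1020
uplift = (oc_mhz - stock_mhz) / stock_mhz
print(f"{uplift:.1%}")  # → 2.0%, hence the sub-2% real-world gains
```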

Diving right into matters, for their R9 Fury card ASUS has opted to go with a fully custom design, pairing a custom PCB with one of the company’s well-known DirectCU III coolers. The PCB itself is quite large, measuring 10.6” long and extending a further 0.6” above the top of the I/O bracket. Unfortunately we’re not able to get a clear shot of the PCB since we need to keep the card in working order, but judging from the design ASUS has clearly overbuilt it for greater purposes. There are voltage monitoring points at the front of the card and unpopulated positions that look to be intended for switches. Consequently I wouldn’t be all that surprised to see this PCB used in a higher-end card in the future.

Moving on, since this is a custom PCB, ASUS has outfitted the card with their own power delivery system. ASUS is using a 12-phase design here, backed by the company’s Super Alloy Power II discrete components. With these components and their “auto-extreme” automated build process, ASUS is looking to make the argument that the STRIX is a higher-quality card, and while we’re not discounting those claims, they’re more or less impossible to verify, especially when compared against AMD’s already well-built reference design.

Meanwhile it comes as a bit of a surprise that even with such a high phase count, ASUS’s default power limits are set relatively low. We’re told that the card’s default ASIC power limit is just 216W, and our testing largely concurs with this. The overall board TBP is still going to be close to AMD’s 275W value, but this means that ASUS has clamped down on the bulk of the card’s TDP headroom by default. The card has enough headroom to sustain 1000MHz in all of our games – which is what really matters – while FurMark runs at a significantly lower frequency than any R9 Fury series card built on AMD’s PCB as a result of the lower power limit. Accordingly, ASUS also bumps up the power limit by 10% when in OC mode to make sure there’s enough headroom for the higher clockspeeds. Ultimately this doesn’t have a performance impact that we can find, and outside of FurMark it’s unlikely to save any power, but given what Fiji is capable of with respect to both performance and power consumption, this is an interesting design choice on ASUS’s part.
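As a quick back-of-the-envelope check on the numbers above (assuming, as our testing suggests, that the 10% OC-mode bump is applied to the 216W ASIC limit):

```python
# Default ASIC power limit and OC mode bump, per our testing.
default_asic_limit_w = 216
oc_mode_bump = 0.10

oc_asic_limit_w = default_asic_limit_w * (1 + oc_mode_bump)
print(f"OC mode ASIC limit: {oc_asic_limit_w:.1f}W")  # → 237.6W
```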

PCB aside, let’s cover the rest of the card. While the PCB is only 10.6” long, ASUS’s DirectCU III cooler is larger still, slightly overhanging the PCB and extending the total length of the card to 12”. Here ASUS uses a collection of stiffeners, screws, and a backplate to reinforce the card and support the bulky heatsink, giving the resulting card a very sturdy design. In a first for any design we’ve seen thus far, the backplate is actually larger than the PCB, running the full 12” to match up with the heatsink, and like Sapphire’s backplate it includes a hole immediately behind the Fiji GPU so that the many capacitors there can be better cooled. Meanwhile builders with large hands and/or tiny cases will want to make note of the card’s additional height; while the card will fit most cases fine, you may want a magnetic screwdriver to secure the I/O bracket screws, as the additional height doesn’t leave much room for fingers.

For the STRIX ASUS is using one of the company’s triple-fan DirectCU III coolers. Starting at the top of the card, the fans on this design are what ASUS calls their “wing-blade” fans. Measuring 90mm in diameter, ASUS tells us that this fan design has been optimized to increase the amount of air pressure at the edges of the fan blades.

Meanwhile the STRIX also implements ASUS’s variation of zero fan speed idle technology, which the company calls 0dB Fan technology. As one of the first companies to implement zero fan speed idling, the STRIX series has become well known for this feature and the STRIX R9 Fury is no exception. Thanks to the card’s large heatsink ASUS is able to power down the fans entirely while the card is near or at idle, allowing the card to be virtually silent under those scenarios. In our testing this STRIX card’s fans kick in at 55C and shut off again at 46C.

ASUS STRIX R9 Fury Zero Fan Idle Points
|  | GPU Temperature | Fan Speed |
|:---|:---|:---|
| Turn On | 55C | 28% |
| Turn Off | 46C | 25% |
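The turn-on/turn-off points above form a simple hysteresis loop, which can be sketched as follows (an illustration using our measured thresholds; the real control logic of course lives in the card’s firmware):

```python
# Measured 0dB Fan thresholds for the STRIX R9 Fury.
TURN_ON_TEMP_C = 55    # fans spin up once the GPU reaches this temperature...
TURN_OFF_TEMP_C = 46   # ...and shut off again once it falls back to this

def fans_running(temp_c: float, currently_running: bool) -> bool:
    """Return whether the fans should be spinning, with hysteresis."""
    if not currently_running:
        return temp_c >= TURN_ON_TEMP_C
    return temp_c > TURN_OFF_TEMP_C

# Rising through 50C with the fans off: they stay off until 55C.
print(fans_running(50, currently_running=False))  # False
print(fans_running(55, currently_running=False))  # True
# Falling back through 50C with the fans on: they keep spinning until 46C.
print(fans_running(50, currently_running=True))   # True
print(fans_running(46, currently_running=True))   # False
```

The 9C gap between the two thresholds prevents the fans from rapidly cycling on and off when the card idles near the turn-on point.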

As for the DirectCU III heatsink on the STRIX, as one would expect ASUS has gone with a large and very powerful heatsink to cool the Fiji GPU underneath. The aluminum heatsink runs just shy of the full length of the card and features 5 copper heatpipes, the largest of these coming in at 10mm in diameter. The heatpipes in turn make almost direct contact with the GPU and HBM, with ASUS having installed a thin heatspreader of sorts to compensate for the uneven heights of the GPU and HBM stacks.

In terms of cooling performance, AMD’s Catalyst Control Center reports that ASUS has capped the card at a 39% fan speed, though in our experience the card actually tops out at 44%. The card will typically reach 44% by the time it hits 70C, at which point temperatures will rise a bit more before the card reaches equilibrium. We’ve yet to see the card need to ramp past 44%, though if the temperature were to exceed the temperature target we expect that the fans would start to ramp up further. Without overclocking, the highest temperature we measured was 78C under FurMark, while Crysis 3 topped out at a cooler 71C.

Moving on, ASUS has also adorned the STRIX with a few cosmetic adjustments of their own. The top of the card features a backlit STRIX logo, which pulsates when the card is turned on. And like some prior ASUS cards, there are LEDs next to each of the PCIe power sockets to indicate whether there is a full connection. On that note, with the DirectCU III heatsink extending past the PCIe sockets, ASUS has once again flipped the sockets so that the tabs face the rear of the card, making it easier to plug and unplug the card even with the large heatsink.

Since this is an ASUS custom PCB, it also means that ASUS has been able to work in their own display I/O configuration. Unlike the AMD reference PCB, ASUS has retained a DL-DVI-D port on their custom PCB, giving the card a total of 3x DisplayPorts, 1x HDMI port, and 1x DL-DVI-D port. Buyers with DL-DVI monitors who don’t want to purchase adapters will want to pay special attention to ASUS’s card.

Finally, on the software front, the STRIX includes the latest iteration of ASUS’s GPU Tweak software, which is now called GPU Tweak II. Since the last time we took a look at GPU Tweak the software has undergone a significant UI overhaul, with ASUS giving it more distinct basic and professional modes. It’s through GPU Tweak II that the card’s OC mode can be accessed, which bumps up the card’s clockspeed to 1020MHz. Meanwhile the other basic overclocking and monitoring functions one would expect from a good overclocking software package are present; GPU Tweak II allows control over clockspeeds, fan speeds, and power targets, while also monitoring all of these features and more.

GPU Tweak II also includes a built-in copy of the XSplit game broadcasting software, along with a 1-year premium license. Finally, perhaps the oddest feature of GPU Tweak II is the software’s Gaming Booster feature, ASUS’s system optimization utility. Gaming Booster can adjust the system’s visual effects, manage system services, and perform memory defragmentation. To be frank, ASUS seems like they were struggling to come up with something to differentiate GPU Tweak II here; messing with system services is a bad idea, and system memory defragmentation is rarely necessary given the random-access nature of RAM.

Wrapping things up, the ASUS STRIX R9 Fury will be the most expensive of the R9 Fury launch cards. ASUS is charging a $30 premium for the card, putting the MSRP at $579.


  • CiccioB - Monday, July 13, 2015 - link

    The myth, here again!
    Let's see these numbers of a miraculous vs crippling driver.
    And I mean I WANT NUMBERS!
    Or what you are talking about is just junk you are reporting because you can't elaborate yourself.
    Come on, the numbers!!!!!!!!!
  • FlushedBubblyJock - Thursday, July 16, 2015 - link

    So you lied loguerto, but the sad truth is amd bails on it's cards and drivers for them FAR FAR FAR sooner than nvidia does.
    YEARS SOONER.

    Get with it bub.
  • Count Vladimir - Thursday, July 16, 2015 - link

    Hard evidence or gtfo.
  • Roboyt0 - Sunday, July 12, 2015 - link

    I am very interested to see how much of a difference ASUS' power delivery system will make for (real) overclocking in general once voltage control is available. If these cards act the same as the 290's did, then AMD's default VRM setup could very likely be more than capable of overclocks in the 25% or more range. I'm basing the 25% or more off of my experience with a half dozen reference based R9 290's, default 947MHz core, that would reach 1200 core clock with ~100mV additional. And if you received a capable card then you could surpass those clocks with more voltage.

    It appears AMD has followed the EXACT same path they did with the 290 and 290X. The 290X always held a slight lead in performance, but the # of GPU components disabled didn't hinder the 290 as much as anyone thought. This is exactly what we see now with the Fury ~VS~ Fury X...overclock the Fury and it's the better buy. All while the Fury X is there for those who want that little bit of extra performance for the premium, and this time you're getting water cooling! It seems like a pretty good deal to me.

    Once 3rd party programmers(not AMD) figure out voltage control for these cards, history will likely repeat itself for AMD. Yes, these will run hotter and use more power than their Nvidia counterparts...I don't see why this is a shock to anyone since this is still 28nm and similar enough to Hawaii...What no one seems to mention is the amount of performance increase compared to Hawaii in the same power/thermal envelope..it's a very significant jump.

    Whom in the enthusiast PC world really cares about the additional power draw? We're looking at 60-90W under normal load conditions; Furmark is NOT normal load. Unless electricity where you hail from is that expensive, it isn't actually costing you that much more in the long run. If you're in the market for a ~$550 GPU, then you probably aren't too concerned with buying a good PSU. What the FurMark power draw of the Fury X/Sapphire Fury really tell us is that the reference PCB is capable of handling 385W+ of draw. This should give an idea of what the card can do once we are able to control the voltage.

    These cards are enthusiast grade and plenty of those users will remove the included cooler for maximum performance. A full cover waterblock is going to be the key to releasing the full potential of Fury(X) just like it was for 290(X). It is a definite plus to see board partners with solid air cooling solutions out of the gate though...Sapphire's cooling solution fares better in temperature AND noise during FurMark than ASUS' when it's pulling 130W additional power! Way to go Sapphire!

    My rant will continue concerning drivers. Nvidia has mature hardware with mature drivers. The fact AMD is keeping up, or winning in some instances, is a solid achievement. Go back to a 290(X) review when their primary competition was a 780 Ti, where the 780 Ti was usually winning. Now, the 390(X), that so many are calling a rebranded POS, easily bests the 780 Ti and competes with GTX 980. Nvidia changed architecture, but AMD is still competitive? Another commenter said it best by saying: "An AMD GPU is like a fine wine, and gets better with age."

    This tells me 3 things...

    1) Once drivers mature, AMD stands to gain solid performance improvements.
    2) Adding voltage control to enable actual overclocking will show the true potential of these cards.
    3) Add these two factors together and AMD has another winning product.

    Lastly we still have DX12 to factor into all of this. Sure, you can say DX12 is too far away, but in actuality it is not. I know there are those people who MUST HAVE the latest and greatest hardware every time something new comes around every ~9 months. However, there are plenty more of us who wait a few generations of GPUs to upgrade. If DX12 brings even a half of the anticipated performance gains and you're in the market, then purchasing this card now, or in the coming months, will be a solid investment for the coming years.
  • Peichen - Monday, July 13, 2015 - link

    Whatever floats your boat. There are still some people like you that believe FX CPUs are faster than i7s and they are what keeps AMD afloat. The rest of us.... we actually consider everything and go Intel & Nvidia.

    There are 3 fails in your assumptions:
    1. Fiji is a much bigger core tied to 4 HBM modules. OC will likely not be as "smooth" as 290X
    2. 60-90W is not just cost in electricity. It is also getting a PSU that will supply the additional draw and more fan(s) and a better case to get the heat out. Or suffer the heat and noise. The $15-45 a year in additional electricity bill also means you will be in the red in a couple of years.
    3. You assume AMD/ATI driver team is still around and will be around a couple of years in the future.
  • silverblue - Tuesday, July 14, 2015 - link

    3. Unless the driver work has been completely outsourced and there's proof of this happening, I'm not sure you can use this as a "fail".

    Fiji isn't a brand new version of GCN so I don't expect the huge gains in performance that are being touted, however whatever they do bring to the table should benefit Tonga as well, which will (hopefully) distance itself from Tahiti and perhaps improve sales further down the stack.
  • Count Vladimir - Thursday, July 16, 2015 - link

    Honestly, driver outsourcing might be for the best in case of AMD.
  • Oxford Guy - Wednesday, July 15, 2015 - link

    The most electrically efficient 3D computer gaming via an ARM chip, right? Think of all the wasted watts for these big fancy GPUs. Even more efficient are text-based games.
  • FlushedBubblyJock - Thursday, July 16, 2015 - link

    You forgot he said spend a hundred and a half on a waterblock...for the amd card, for "full potential"..

    ROFL - once again the future that never comes is very bright and very expensive.
  • beck2050 - Monday, July 13, 2015 - link

    A bit disingenuous as custom cooled over clocked 980s are the norm these days and easily match or exceed Fury, while running cooler with much less power and can be found cheaper. AMD HAS its work cut out.
