Final Words

Bringing this video card review to a close, we’ll start off with how the R9 Fury compares to its bigger sibling, the R9 Fury X. Although looking at the bare specifications of the two cards would suggest they’d be fairly far apart in performance, this is not what we have found. Between 4K and 1440p the R9 Fury’s performance deficit is only 7-8%, noticeably less than what we’d expect given the number of disabled CUs.

In fact a significant amount of the performance gap appears to be from the reduction in clockspeed, and not the number of CUs. And while overclocking back to R9 Fury X clockspeeds can’t recover all of the performance, it recovers a lot of it. This implies that Fiji on the whole is overweight on shading/texturing resources, as it’s not greatly impacted by having some of those resources cut off.

Consequently I can see why AMD opted to launch the R9 Fury X and R9 Fury separately, and to withhold the latter’s specifications until now, as this level of performance makes the R9 Fury a bit of a spoiler for the R9 Fury X. A 7-8% deficit makes the R9 Fury notably slower than the R9 Fury X, but it’s also $100 cheaper; to turn that argument on its head, the last 10% or so of performance that the R9 Fury X offers comes at quite a price premium. This arguably makes the R9 Fury the better value, and while we’re not complaining, it does put AMD in an awkward spot.

As for the competition, that’s a bit more of a mixed bag. The R9 Fury X had to compete with the GTX 980 Ti but couldn’t surpass it, which hurt it and made the GTX 980 Ti the safer buy. The R9 Fury, on the other hand, only needs to compete with the older GTX 980, and while it’s by no means a clean sweep, it’s a good outcome for AMD. The R9 Fury offers between 8% and 17% better performance than the GTX 980, depending on whether we’re looking at 1440p or 4K. I don’t believe the R9 Fury is a great 4K card – if you really want 4K, you need more rendering power at this time – but even at 1440p this is a solid performance lead.

Along with the R9 Fury’s performance advantage, the GTX 980 is also better competition for the R9 Fury (and Fiji in general) since the GTX 980 is only available with 4GB of VRAM. This negates the Fiji GPU’s 4GB HBM limit, which is one of the things that held back the R9 Fury X against the GTX 980 Ti. As a result there are fewer factors to consider, and in a straight-up performance shootout with the GTX 980, the R9 Fury is 10% more expensive for 8%+ better performance. This doesn’t make either card a notably better value, but it does make the R9 Fury a very reasonable alternative to the GTX 980 on a price/performance basis.
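To put that price/performance comparison in more concrete terms, here is a quick back-of-the-envelope sketch. The prices used below (roughly $549 for the R9 Fury and $499 for the GTX 980) are assumptions chosen to be consistent with the ~10% price gap mentioned above, and the performance leads are the 8% (1440p) and 17% (4K) figures from our results; actual street prices will vary.

```python
# Back-of-the-envelope price/performance comparison (illustrative only).
# Prices are assumed values, consistent with the ~10% gap noted in the text.
fury_price = 549.0      # USD, assumed
gtx980_price = 499.0    # USD, assumed

price_ratio = fury_price / gtx980_price    # ~1.10: the R9 Fury costs ~10% more

# R9 Fury's performance relative to the GTX 980, per the review's results
perf_lead = {"1440p": 1.08, "4K": 1.17}

for resolution, lead in perf_lead.items():
    value_ratio = lead / price_ratio       # >1.0 means the R9 Fury is the better value
    print(f"{resolution}: {lead:.0%} of the performance at {price_ratio:.0%} "
          f"of the price -> {value_ratio:.2f}x performance-per-dollar")
```

At 1440p the result is roughly a wash (~0.98x), while at 4K the R9 Fury comes out slightly ahead (~1.06x), which lines up with the conclusion that neither card is a notably better value but the R9 Fury is a reasonable alternative.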

The one area where the R9 Fury struggles, however, is power efficiency. The GTX 980’s power efficiency is practically legendary at this point; the R9 Fury’s is not. Even the lower-power of our two R9 Fury cards, the ASUS STRIX, can’t come close to the GTX 980’s efficiency, and that’s really all there is to it. If energy efficiency doesn’t matter to you then the R9 Fury’s performance is competitive; otherwise the GTX 980 is a bit slower, a bit cheaper, and uses a lot less power. That said, AMD’s partners do deserve some credit for keeping their acoustics well under control despite the high power and heat load. It’s not an apples-to-apples comparison against the reference GTX 980 and its blower, but at the very least picking the R9 Fury over the GTX 980 doesn’t mean you have to pick a loud card as well.

And that brings us to the third aspect of this review, which is comparing the R9 Fury cards from Sapphire and ASUS. Both partners have come to the plate with some very good open-air cooled designs, and while it’s a bit unusual for AMD to launch with so few partners, what those partners have put together certainly paints the R9 Fury in a positive light.

Picking between the two ends up being a harder task than we expected, in part because of how different they are at times. The two cards offer very similar performance, with Sapphire’s mild factory overclock giving its card only the slightest of edges, which is more or less what we expected.

However the power and acoustics situation is very different. On its own the ASUS STRIX’s acoustics would look good, but compared to the Sapphire Tri-X’s deliciously absurd acoustics it’s the clear runner-up. On the other hand the ASUS card has a clear power efficiency advantage of its own, though I’m not convinced that this is anything more than a byproduct of the ASUS card randomly receiving a better chip. As a result I’m not sure this same efficiency advantage exists between all ASUS and Sapphire cards; ASUS’s higher-voltage R9 Fury chips have to go somewhere.

In any case, both are solid cards, but if we have to issue a recommendation then it’s hard to argue with the Sapphire Tri-X’s pricing and acoustics right now. It’s the quietest of the R9 Fury cards, and it’s slightly cheaper as well. ASUS’s strengths, meanwhile, lie more in their included software and their reputation for support than in their outright performance in our benchmark suite.

And with that, we wrap up our review of the second product in AMD’s four Fiji launches. The R9 Fury was the last product with a scheduled launch date; however, AMD has previously told us that the R9 Nano will launch this summer, meaning we should expect it in the next couple of months. With a focus on size and efficiency, the R9 Nano should be a very different card from the R9 Fury and R9 Fury X, which makes us curious to see just what AMD can pull off when optimizing for efficiency over absolute performance. But that will be a question for another day.

Comments

  • nightbringer57 - Friday, July 10, 2015 - link

    Intel kept it in stock for a while but it didn't sell. So the management decided to get rid of it, gave it away to a few partners (Dell, HP - many OEMs used BTX for quite a while, both because it was a good user lock-down solution and because the inconveniences of BTX didn't matter in OEM computers, while the advantages were still there) and no one ever heard of it on the retail market again?
  • nightbringer57 - Friday, July 10, 2015 - link

    Damn those not-editable comments...
    I forgot to add: with the switch from the NetBurst/Prescott architecture to Conroe (and its followers), CPU cooling became much less of a hassle for mainstream models, so Intel did not have anything left to gain from the effort put into BTX.
  • xenol - Friday, July 10, 2015 - link

    It survived in OEMs. I remember cracking open Dell computers in the latter half of the 2000s and finding out they were BTX.
  • yuhong - Friday, July 10, 2015 - link

    I wonder if a BTX2 standard that fixes the problems of the original BTX would be a good idea.
  • onewingedangel - Friday, July 10, 2015 - link

    With the introduction of HBM, perhaps it's time to move to socketed GPUs.

    It seems ridiculous for the industry-standard spec to devote so much space to the comparatively low-power CPU whilst the high-power GPU has to fit within the confines of (multiple) PCIe expansion slots.

    Is it not time to move beyond the confines of ATX?
  • DanNeely - Friday, July 10, 2015 - link

    Even with the smaller PCB footprint allowed by HBM, filling up the area currently taken by expansion cards would only give you room for a single GPU plus support components on an mATX-sized board (most of the space between the PCIe slots and the edge of the mobo is used for other stuff that would need to be kept, not replaced with GPU bits), and the tower cooler on top of it would be a major obstruction for any non-GPU PCIe cards you might want to put into the system.
  • soccerballtux - Friday, July 10, 2015 - link

    man, the convenience of the socketed GPU is great, but just think of how much power we could have if it had its own dedicated card!
  • meacupla - Friday, July 10, 2015 - link

    The clever design trend, or at least what I think is clever, is where the GPU and CPU heatsinks are connected together, so that instead of many smaller heatsinks trying to cool one chip each, you have one giant heatsink doing all the work, which can result in less space (as opposed to volume) being occupied by the heatsink.

    You can see this sort of design on high end gaming laptops, Mac Pro, and custom water cooling builds. The only catch is, they're all expensive. Laptops and Mac Pro are, pretty much, completely proprietary, while custom water cooling requires time and effort.

    If all ATX mobos and GPUs had their core and heatsink mounting holes in the exact same spot, it would be much easier to design a 'universal multi-core heatsink' that you could just attach to everything that needs it.
  • Peichen - Saturday, July 11, 2015 - link

    That's quite a good idea. With heat pipes, distance doesn't really matter, so if there were a CPU heatsink that could extend 4x 8mm/10mm heat pipes over the video card to cool the GPU, it would be far quieter than the 3x 90mm fan coolers on video cards now.
  • FlushedBubblyJock - Wednesday, July 15, 2015 - link

    330 watts transferred to the low-lying motherboard, with PINS attached to AMD's core failure next...
    Slap that monster heat onto the motherboard, then you can have a giant green plastic enclosure like Dell towers to try to move that heat outside the case... oh, plus a whole 'nother giant VRM setup on the motherboard... yeah they sure will be doing that soon... just lay down that extra 50 bucks on every motherboard with some 6X VRMs just in case an AMD fanboy decides he wants to buy the megawatter AMD rebranded chip...

    Yep, NOT HAPPENING !
