The AMD Radeon R9 Fury Review, Feat. Sapphire & ASUS
by Ryan Smith on July 10, 2015 9:00 AM EST

Overclocking
Finally, no review of a high-end video card would be complete without a look at overclocking performance.
As was the case with the R9 Fury X two weeks ago, overclockers looking at out-of-the-box overclocking performance are going to come away disappointed with the R9 Fury cards. While cooling and power delivery are overbuilt on both the ASUS and Sapphire cards, the R9 Fury is still very restricted when it comes to overclocking. There is no voltage control at this time (even unofficially), and the card's voltage profile has been finely tuned to avoid supplying the card with more voltage than is necessary. As a result the cards have relatively little overclocking potential without voltage adjustments.
Radeon R9 Fury Series Overclocking

| | Ref. R9 Fury X | ASUS R9 Fury | Sapphire R9 Fury OC |
|---|---|---|---|
| Boost Clock | 1125MHz | 1075MHz | 1100MHz |
| Memory Clock | 1Gbps (500MHz DDR) | 1.1Gbps (550MHz DDR) | 1.1Gbps (550MHz DDR) |
| Power Limit | 100% | 115% | 100% |
| Max Voltage | 1.212v | 1.169v | 1.212v |
Neither R9 Fury card is able to overclock as well as our R9 Fury X, indicating that these are likely lower quality (or lower headroom) chips. Ultimately we’re able to get another 75MHz out of the ASUS, for 1075MHz, and another 60MHz out of the Sapphire, for 1100MHz.
Meanwhile with unofficial memory overclocking support now attainable via MSI Afterburner, we’ve also tried our hand at memory overclocking. There’s not a ton of headroom here before artifacting sets in, but we were able to get another 10% (50MHz) out of both R9 Fury cards.
Using our highest-clocking card, the Sapphire, as a reference point, the actual performance gains are in the 7-10% range, with an average of 8% over a reference-clocked R9 Fury. This is actually a bit better than the R9 Fury X and its 5% performance gains; however, it's still not going to provide a huge difference in performance. We'd need to be able to overclock beyond 1100MHz to see any major overclocking gains on the R9 Fury cards.
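The clockspeed uplifts quoted above work out as simple percentages over the reference clocks; a minimal sketch of the arithmetic (the `uplift` helper is ours, not part of any tool used in this review), using the 1000MHz reference R9 Fury boost clock and 500MHz memory clock:

```python
# Percent uplift from an overclock, using the clocks reported above.
def uplift(base_mhz: float, oc_mhz: float) -> float:
    return (oc_mhz - base_mhz) / base_mhz * 100

# Reference R9 Fury boost clock is 1000MHz; our ASUS sample reached
# 1075MHz and our Sapphire sample 1100MHz.
print(f"ASUS core:     {uplift(1000, 1075):.1f}%")  # 7.5%
print(f"Sapphire core: {uplift(1000, 1100):.1f}%")  # 10.0%
# Memory: 500MHz -> 550MHz via MSI Afterburner
print(f"Memory:        {uplift(500, 550):.1f}%")    # 10.0%
```

As the 8% average framerate gain sits below the 10% core clock uplift, the Sapphire card is not perfectly clockspeed-bound at these settings.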
288 Comments
FlushedBubblyJock - Wednesday, July 15, 2015 - link
Oh, gee, forgot, it's not amd's fault ... it was "developers and access" which is not amd's fault, either... of course...OMFG
redraider89 - Monday, July 20, 2015 - link
What's your excuse for being such an idiotic, despicable and ugly intel/nvidia fanboy? I don't know, maybe your parents? Somewhere you went wrong.

OldSchoolKiller1977 - Sunday, July 26, 2015 - link
I am sorry, and NVIDIA fan boys resort to name calling.... what was it that you said, and I quote, "Hypocrite" :)

redraider89 - Monday, July 20, 2015 - link
Your problem is deeper than just that you like intel/nvidia, since you apparently hate people who don't like those, and ONLY because they like something different than you do.

ant6n - Saturday, July 11, 2015 - link
A third way to look at it is that maybe AMD did it right.

Let's say the chip is built from 80% stream processors (by area), the most redundant elements. If some of those functional elements fail during manufacture, they can disable them and sell it as the cheaper card. If something in the other 20% of the chip fails, the whole chip may be garbage. So basically you want a card such that if all the stream processors are functional, the other 20% become the bottleneck, whereas if some of the stream processors fail and they have to sell it as a simple Fury, then the stream processors become the bottleneck.
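The binning argument in this comment can be illustrated with a toy yield model. All the numbers below are assumptions for illustration, not AMD's actual defect rates; the only figures taken from the hardware are the Fiji CU counts (64 for Fury X, 56 for Fury):

```python
import random
from collections import Counter

FULL_CUS, CUT_CUS = 64, 56   # Fury X ships fully enabled; Fury has 56 CUs
P_CU_DEFECT = 0.01           # assumed per-CU defect probability
P_OTHER_DEFECT = 0.05        # assumed defect rate in the non-redundant logic

def bin_die(rng: random.Random) -> str:
    """Classify one simulated die into a salable bin."""
    if rng.random() < P_OTHER_DEFECT:
        return "scrap"       # non-redundant 20% failed: whole die is garbage
    good = sum(rng.random() >= P_CU_DEFECT for _ in range(FULL_CUS))
    if good == FULL_CUS:
        return "Fury X"      # every CU works: full part
    if good >= CUT_CUS:
        return "Fury"        # enough CUs survive to ship the cut-down part
    return "scrap"

rng = random.Random(0)
print(Counter(bin_die(rng) for _ in range(10_000)))
```

Under these assumed rates most defective dies are still salable as a Fury, which is the commenter's point: redundancy in the stream processors turns partial defects into a cheaper SKU instead of scrap.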
thomascheng - Saturday, July 11, 2015 - link
That is probably AMD's smart play. Fury was always the intended card. Perfect cards will be the X and perhaps less perfect cards will be the Nano.

FlushedBubblyJock - Thursday, July 16, 2015 - link
"fury was always the intended card"ROFL
amd fanboy out much ?
I mean it is unbelievable, what you said, and that you said it.
theduckofdeath - Friday, July 24, 2015 - link
Just shut up, Bubby.

akamateau - Tuesday, July 14, 2015 - link
Anand has been running DX12 benchmarks since last spring. When they compared Radeon 290x to GTX 980 Ti, nVidia ordered them to stop. That is why no more DX12 benchmarks have been run.

Intel and nVidia are at a huge disadvantage with DX12 and Mantle.
The reason:
AMD IP: Asynchronous Shader Pipelines and Asynchronous Compute Engines.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
We saw mantle benchmarks so your fantasy is a bad amd fanboy delusion.