Overclocking

Finally, no review of a high-end video card would be complete without a look at overclocking performance.

To get right to the point here, overclockers looking for out-of-the-box overclocking headroom are going to come away disappointed. While the cooling and power delivery are overbuilt, in other respects the R9 Fury X is very locked down when it comes to overclocking. There is no voltage control at this time (not even unofficially), there is no official HBM clockspeed control, and the card's voltage profile has been finely tuned to avoid supplying the card with any more voltage than necessary. As a result the card has relatively little overclocking potential without voltage adjustments.

So what do we get for overclocking?

Radeon R9 Fury X Overclocking
                 Stock                 Overclocked
Boost Clock      1050MHz               1125MHz
Memory Clock     1Gbps (500MHz DDR)    1Gbps (500MHz DDR)
Max Voltage      N/A                   N/A

Our efforts net us 75MHz, which is actually 25MHz less than what AMD published in their reviewer's guide. Even a 100MHz overclock led to artifacting in some games, requiring that we step down to 75MHz for a safe and sustainable overclock.
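
Mechanically this is the usual trial-and-error loop: bump the core clock one step at a time, stress test for artifacts, and fall back to the last clean step. The sketch below illustrates that process only; set_core_clock() and passes_artifact_test() are hypothetical stand-ins for whatever overclocking utility and stress test are actually in use.

```python
# Hypothetical incremental overclock search: step the core clock up until
# artifacting appears, then settle on the last stable offset.
STOCK_MHZ = 1050
STEP_MHZ = 25
MAX_OFFSET_MHZ = 200

def find_stable_overclock(set_core_clock, passes_artifact_test):
    """set_core_clock and passes_artifact_test are placeholders for the
    real overclocking tool and stress/artifact test, respectively."""
    stable_offset = 0
    for offset in range(STEP_MHZ, MAX_OFFSET_MHZ + STEP_MHZ, STEP_MHZ):
        set_core_clock(STOCK_MHZ + offset)
        if not passes_artifact_test():
            break  # e.g. +100MHz artifacted on our card; fall back
        stable_offset = offset
    set_core_clock(STOCK_MHZ + stable_offset)
    return stable_offset  # +75MHz in our case
```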

The end result is that the overclocked R9 Fury X runs at 1125MHz core and 1Gbps memory, a 75MHz (7%) increase in the GPU clockspeed and 0% increase in the memory clockspeed. This puts a very narrow window on expected performance gains, as we shouldn’t exceed a 7% gain in any game, and will almost certainly come in below 7% in most games.
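
For reference, the math behind that 7% ceiling is straightforward; the snippet below simply restates the stock and overclocked figures from the table above.

```python
stock_core_mhz, oc_core_mhz = 1050, 1125
core_gain = (oc_core_mhz - stock_core_mhz) / stock_core_mhz
print(f"Core clock gain: {core_gain:.1%}")      # ~7.1%

stock_mem_gbps, oc_mem_gbps = 1.0, 1.0          # memory left at stock
mem_gain = (oc_mem_gbps - stock_mem_gbps) / stock_mem_gbps
print(f"Memory clock gain: {mem_gain:.0%}")     # 0%; any gains come from the core OC alone
```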

OC: Battlefield 4 - 3840x2160 - Ultra Quality - 0x MSAA

Our gaming benchmarks find just that. A few percent performance improvement here, a 5% improvement there. Overall we wouldn't go as far as saying there's no reason to overclock, but with such limited gains it's hardly worth the trouble right now.

True overclocking is going to have to involve BIOS modding, a riskier and warranty-voiding strategy, but one that should be far more rewarding. With more voltage I have little doubt that R9 Fury X could clock higher, though it’s impossible to guess by how much at this time. In any case the card is certainly built for it, as the oversized cooler, high power delivery capabilities, and dual BIOS switch provide all the components necessary for such an overclocking attempt.

Meanwhile HBM is a completely different animal, and while unofficial overclocking is looking promising, as a new technology it will take some time to get a good feel for it and to understand just what kind of performance improvements it can deliver. The R9 Fury X is starting out with quite a bit of memory bandwidth right off the bat (512GB/sec), so it may not be bandwidth starved as often as cards like the R9 290X were.
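
As a quick back-of-the-envelope check on that 512GB/sec figure, it falls straight out of Fiji's published 4096-bit HBM interface (four 1024-bit stacks) running at 1Gbps per pin, i.e. the 500MHz DDR memory clock from the table above.

```python
bus_width_bits = 4096      # four 1024-bit HBM stacks on Fiji
data_rate_gbps = 1.0       # 500MHz DDR = 1Gbps per pin
bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
print(f"Peak memory bandwidth: {bandwidth_gb_s:.0f} GB/sec")   # 512 GB/sec
```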

Comments

  • chizow - Thursday, July 2, 2015 - link

    What about geometry Ryan? ROPs are often used interchangeably with Geometry/Set-up engine, there is definitely something going on with Fury X at lower resolutions, in instances where SP performance is no problem, it just can't draw/fill pixels fast enough and performs VERY similarly to previous gen or weaker cards (290X/390X and 980). TechReport actually has quite a few theoreticals that show this, where their pixel fill is way behind GM200 and much closer to Hawaii/GM204.
  • extide - Thursday, July 2, 2015 - link

    Yeah my bet is on Geometry. Check out the Synthetics page. It owns the Pixel and Texel fillrate tests, but loses on the Tessellation test, which has a large dependency on geometry. nVidia has also been historically very strong with geometry.
  • CajunArson - Thursday, July 2, 2015 - link

    Thanks for the review! While the conclusions aren't really any different than all the other reputable review sites on the Interwebs, you were very thorough and brought an interesting perspective to the table too. Better late than never!
  • NikosD - Thursday, July 2, 2015 - link

    You must use the latest nightly build of LAV filters, in order to be able to use the 4K H.264 DXVA decoder of AMD cards.
    All previous builds fall back to SW mode.
  • tynopik - Thursday, July 2, 2015 - link

    "today’s launch of the Fiji GPU"
  • andychow - Thursday, July 2, 2015 - link

    Best review ever. Worth the wait. Get sick more often!
  • tynopik - Thursday, July 2, 2015 - link

    pg 2 - compression taking palce
  • limitedaccess - Thursday, July 2, 2015 - link

    Ryan, regarding Mantle performance back in the R9 285 review (http://www.anandtech.com/show/8460/amd-radeon-r9-2... you wrote that AMD stated the issue with performance regression was that developers had not yet optimized for Tonga's newer architecture. While here you state that the performance regression is due to AMD having not optimized on the driver side. What is the actual case? What is the actual weighting given these three categories? -
    Hardware Driver
    API
    Software/Game

    What I'm wondering is, if we assume that upcoming low level APIs will have similar behavior to Mantle, what will happen going forward as more GPU architectures and newer games are introduced? If the onus shifts especially heavily towards the software side, it seems more realistic in practice that developers will have a much narrower scope in which to optimize.

    I'm wondering if Anandtech could possibly look more in depth into this issue as it pertains to the move towards low level APIs in the future, as it could have large implications in terms of the software/hardware support relationship going forward.
  • Ryan Smith - Thursday, July 2, 2015 - link

    "What is the actual case? What is the actual weighting given these three categories? -"

    Right now the ball appears to be solidly in AMD's court. They are taking responsibility for the poor performance of certain Mantle titles on R9 Fury X.

    As it stands I hesitate to read into this too much for DX12/Vulkan. Those are going to be finalized, widely supported APIs, unlike Mantle which has gone from production to retirement in the span of just over a year.
  • limitedaccess - Thursday, July 2, 2015 - link

    Thanks for the response. I guess we will see more for certain as time moves on.

    My concern is if lower level APIs require more architecture specific optimizations and the burden is shifted to developers in practice that will cause some rather "interesting" implications.

    Also of interest would be how much of reviewers' test suites will still look at DX11 performance as a possible fallback, should this become an issue.
