Power, Temperature, & Noise

As always, last but not least is our look at power, temperature, and noise. Next to price and performance of course, these are some of the most important aspects of a GPU, due in large part to the impact of noise. All things considered, a loud card is undesirable unless there’s a sufficiently good reason – or sufficiently good performance – to ignore the noise.

Starting with voltages, at least for the time being we have nothing to report for the R9 Fury X. AMD is either not exposing voltages in their drivers, or our existing tools (e.g. MSI Afterburner) do not know how to read the data, and as a result we cannot see any voltage information at this time.

Radeon R9 Fury X Average Clockspeeds
Game - R9 Fury X
Max Boost Clock - 1050MHz
Battlefield 4 - 1050MHz
Crysis 3 - 1050MHz
Mordor - 1050MHz
Civilization: BE - 1050MHz
Dragon Age - 1050MHz
Talos Principle - 1050MHz
Far Cry 4 - 1050MHz
Total War: Attila - 1050MHz
GRID Autosport - 1050MHz
Grand Theft Auto V - 1050MHz
FurMark - 985MHz

Jumping straight to average clockspeeds then, with an oversized cooler and a great deal of power headroom, the R9 Fury X has no trouble hitting and sustaining its 1050MHz boost clockspeed throughout every second of our benchmark runs. The card was designed to be the pinnacle of Fiji cards, and ensuring it always runs at a high clockspeed is one of the elements in doing so. The lack of throttling means there’s really little to talk about here, but it sure gets results.
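
For readers looking to reproduce the table above, the figures are simple averages of logged clockspeed samples. Below is a minimal sketch, assuming a hypothetical CSV log with one core clock sample per second (as tools like GPU-Z or MSI Afterburner can produce); the file name and column header are placeholders rather than part of our actual test setup.

```python
import csv

def average_clock(log_path, column="GPU Clock [MHz]"):
    """Average the per-sample core clock readings in a monitoring log.

    Assumes a CSV with one row per sample and a clock column in MHz; the
    column name is a placeholder and will vary by monitoring tool.
    """
    samples = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            value = (row.get(column) or "").strip()
            if value:
                samples.append(float(value))
    return sum(samples) / len(samples) if samples else 0.0

# A run that never throttles simply averages out to its full boost clock:
# print(average_clock("furyx_crysis3.csv"))  # ~1050.0 on the R9 Fury X
```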

Idle Power Consumption

Idle power does not start things off especially well for the R9 Fury X, though it’s not too poor either. The 82W at the wall is a distinct increase over NVIDIA’s latest cards, and even the R9 290X. On the other hand the R9 Fury X has to run a CLLC rather than simple fans. Further complicating matters is the fact that the card idles at 300MHz for the core, but the memory doesn’t idle at all. HBM is meant to have rather low power consumption under load versus GDDR5, but one wonders just how that compares at idle.

Load Power Consumption - Crysis 3

Switching to load power consumption, we go first with Crysis 3, our gaming load test. Earlier in this article we discussed all of the steps AMD took to rein in power consumption, and the payoff is seen here. Equipped with an R9 Fury X, our system pulls 408W at the wall, a significant amount of power, but only 20W more than the same system with a GTX 980 Ti. Given that the R9 Fury X’s framerates trail the GTX 980 Ti here, this puts AMD’s overall energy efficiency in a less-than-ideal spot, but it’s not poor either, especially compared to the R9 290X. Power consumption has essentially stayed put while performance has gone up 35%+.

On a side note, as we mentioned in our architectural breakdown, the amount of power this card draws will depend on its temperature. What is 408W at the wall at 65C is only 388W at the wall at 40C, as current leakage scales with GPU temperature. Ultimately the R9 Fury X will trend towards 65C, but it means that early readings can be a bit misleading.
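
To put rough numbers on that temperature dependence, the 20W swing between 40C and 65C works out to a bit under 1W at the wall per degree Celsius over that range. The snippet below is a back-of-the-envelope linear interpolation between those two measured points only; real leakage does not scale linearly with temperature, so treat it as an illustration rather than a model of Fiji's behavior.

```python
def estimated_wall_power(temp_c, p_cool=388.0, t_cool=40.0, p_warm=408.0, t_warm=65.0):
    """Linearly interpolate system power draw between our two measured points.

    388W at 40C and 408W at 65C are the wall measurements from this review;
    the straight line between them is an illustrative assumption, not AMD data.
    """
    slope = (p_warm - p_cool) / (t_warm - t_cool)  # 0.8W at the wall per degree C
    return p_cool + slope * (temp_c - t_cool)

print(estimated_wall_power(50))  # ~396W early in a session, before the loop warms up
print(estimated_wall_power(65))  # 408W once the card settles at its equilibrium temperature
```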

Load Power Consumption - FurMark

As for FurMark, what we find is that power consumption at the wall is much higher, which, far from being a problem for AMD, proves that the R9 Fury X has much greater thermal and electrical limits than the R9 290X, or the NVIDIA competition for that matter. AMD does ultimately throttle the R9 Fury X here, to around 985MHz, but the card easily draws quite a bit of power and dissipates quite a bit of heat in the process. If the card had a gaming scenario that called for greater power consumption – say BIOS-modded overclocking – then these results paint a favorable picture.
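
As a rough illustration of why the clockspeed lands where it does, the sketch below models a card that trims its clock in small steps until its estimated draw fits under a board power limit. This is not AMD's PowerTune algorithm; the wattages, the linear power-versus-clock assumption, and the step size are all invented for the example.

```python
def throttled_clock(boost_mhz, demanded_watts, power_limit_watts, step_mhz=5):
    """Reduce the clock in small steps until the estimated draw fits the power limit.

    Assumes power scales linearly with clock at a fixed voltage, which is a
    simplification; real DVFS also lowers voltage and saves more power per step.
    """
    clock = boost_mhz
    watts = demanded_watts
    while watts > power_limit_watts and clock > 0:
        clock -= step_mhz
        watts = demanded_watts * clock / boost_mhz
    return clock

# Hypothetical numbers: a workload that would demand ~320W against a ~300W cap
# settles just below 985MHz, in the same ballpark as our FurMark result.
print(throttled_clock(1050, demanded_watts=320, power_limit_watts=300))  # 980
```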

Idle GPU Temperature

Moving on to temperatures, the R9 Fury X starts off looking very good. Even at minimum speeds the pump and radiator lead to the Fiji GPU idling at just 27C, cooler than anything else on this chart. More impressive still is the R9 290X comparison, where the R9 Fury X is some 15C cooler, and this is just at idle.

Load GPU Temperature - Crysis 3

Loading up a game, after a good 10 minutes or so the R9 Fury X finally reaches its equilibrium temperature of 65C. Though the default target GPU temperature is 75C, it’s at 65C that the card begins to ramp up its fan in order to increase cooling performance. The end result is that the card reaches equilibrium at this point, and in our experience it should not exceed this temperature.

Compared to the NVIDIA cards, this is an 18C advantage in AMD’s favor. GPU temperatures are not everything – ultimately it’s fan speed and noise we’re more interested in – but for AMD GPU temperatures are an important component of controlling GPU power consumption. By keeping the Fiji GPU at 65C AMD is able to keep leakage power down, and therefore energy efficiency up. R9 Fury X would undoubtedly fare worse in this respect if it got much warmer.
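
That 65C equilibrium is the product of a straightforward control loop: once the GPU reaches its temperature target, the fan speeds up just enough to hold it there. The sketch below is a minimal proportional controller to illustrate the idea; the target, gain, and fan limits are invented for the example and bear no relation to AMD's actual fan control firmware.

```python
def fan_step(temp_c, target_c=65.0, fan_pct=15.0, gain=2.0, fan_min=15.0, fan_max=100.0):
    """One proportional control step: speed the fan up above target, ease off below it.

    The target, starting duty cycle, and gain are illustrative values only.
    """
    error = temp_c - target_c
    new_fan = fan_pct + gain * error
    return max(fan_min, min(fan_max, new_fan))

# Below target the fan sits at its floor; past 65C it ramps just enough to pull
# the GPU back, which is the equilibrium behavior we see on the R9 Fury X.
print(fan_step(60.0))  # 15.0 - pinned to the minimum duty cycle
print(fan_step(67.0))  # 19.0 - a modest bump, akin to the ~19% fan speed we measure under load
```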

Finally, it’s once again remarkable to compare the R9 Fury X to the R9 290X. With the former AMD has gone cool to keep power down, whereas with the latter AMD went hot to improve cooling efficiency. As a result the R9 Fury X is some 29C cooler than the R9 290X. One can only imagine what that has done for leakage.

Load GPU Temperature - FurMark

The situation is more or less the same under FurMark. The NVIDIA cards are set to cap at 83C, and the R9 Fury X is set to cap at 65C. This is regardless of whether it’s a game or a power virus like FurMark.

Idle Noise Levels

Last but not least, we have noise. Starting with idle noise, as we mentioned in our look at the build quality of the R9 Fury X, the card’s cooler is effective under load, but a bit of a liability at idle. The use of a pump brings with it pump noise, and this drives up idle noise levels by around 4dB. 41.5dB is not too terrible for a closed case, and it’s not an insufferable noise, but HTPC users will want to be wary. If anything this makes a good argument for looking forward to the R9 Nano.
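
For those curious what the pump alone contributes, decibel levels combine on a power basis rather than adding linearly, so the pump's share can be backed out from the totals. The snippet below assumes a roughly 37.5dB system baseline without the pump, inferred from the "around 4dB" increase; that baseline is our assumption, not a separately measured figure.

```python
import math

def subtract_db(total_db, background_db):
    """Estimate an individual source's level by removing the background on a power basis."""
    remaining = 10 ** (total_db / 10) - 10 ** (background_db / 10)
    return 10 * math.log10(remaining)

# 41.5dB measured at idle with the pump running, against an assumed ~37.5dB baseline:
print(round(subtract_db(41.5, 37.5), 1))  # ~39.3dB attributable to the pump itself
```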

Load Noise Levels - Crysis 3

Because the R9 Fury X starts out a bit loud due to pump noise, the actual noise increase under load is absolutely minuscule. The card tops out at 19% fan speed, 4% (or about 100 RPM) over its default fan speed of 15%. As a result we measure an amazing 43dB under load. For a high performance video card. For a high performance card within spitting distance of NVIDIA’s flagship, itself one of the best air cooled video cards of all time.

These results admittedly were not unexpected – one need only look at the R9 295X2 to get an idea of what a CLLC could do for noise – but they are nonetheless extremely impressive. Most midrange cards are louder than this despite offering a fraction of the R9 Fury X’s gaming performance, which puts the R9 Fury X at a whole new level for load noise from a high performance video card.

Load Noise Levels - FurMark

The trend continues under FurMark. The fan speed ramps up quite a bit further here thanks to the immense load from FurMark, but the R9 Fury X still perseveres. 46.7 dB(A) is once again better than a number of mid-range video cards, never mind the other high-end cards in this roundup. The R9 Fury X is dissipating 330W of heat and yet it’s quieter than the GTX 980 at half that heat, and around 6 dB(A) quieter than the 250W GM200 cards.
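
To put that roughly 6 dB(A) gap in perspective, a decibel difference maps to a sound-power ratio of ten raised to one-tenth of the difference, so 6 dB(A) corresponds to roughly a quarter of the radiated sound power. A quick check, using the 6 dB figure from the comparison above:

```python
def sound_power_ratio(delta_db):
    """Convert a decibel difference into an approximate sound-power ratio."""
    return 10 ** (delta_db / 10)

# A ~6 dB(A) advantage means the louder GM200 cards radiate roughly 4x the sound power.
print(round(sound_power_ratio(6.0), 2))  # 3.98
```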

There really aren’t enough nice things I can say about the R9 Fury X’s cooler. AMD took the complaints about the R9 290 series to heart, and produced something that isn’t just better than their previous attempt, but a complete inverse of their earlier strategy. The end result is that the R9 Fury X is very nearly whisper quiet while gaming, and only a bit louder under even the worst case scenario. This is a remarkable change, and one that ears everywhere will appreciate.

That said, the mediocre idle noise showing will undoubtedly dog the R9 Fury X in some situations. For most cases it will not be an issue, but it does close some doors on ultra-quiet setups. The R9 Fury X in that respect is merely very, very quiet.

458 Comments

  • mikato - Tuesday, July 7, 2015 - link

    Wow very interesting, thanks bugsy. I hope those guys at the various forums can work out the details and maybe a reputable tech reviewer will take a look.
  • OrphanageExplosion - Saturday, July 4, 2015 - link

    I'm still a bit perplexed about how AMD gets an absolute roasting for CrossFire frame-pacing - which only impacted a tiny amount of users - while the sub-optimal DirectX 11 driver (which will affect everyone to varying extents in CPU-bound scenarios) doesn't get anything like the same level of attention.

    I mean, AMD commands a niche when it comes to the value end of the market, but if you're combining a budget CPU with one of their value GPUs, chances are that in many games you're not going to see the same kind of performance you see from benchmarks carried out on mammoth i7 systems.

    And here, we've reached a situation where not even the i7 benchmarking scenario can hide the impact of the driver on a $650 part, hence the poor 1440p performance (which is even worse at 1080p). Why invest all that R&D, time, effort and money into this mammoth piece of hardware and not improve the driver so we can actually see what it's capable of? Is AMD just sitting it out until DX12?
  • harrydr - Saturday, July 4, 2015 - link

    With the black screen problem of R9 graphics cards, it's not easy to support AMD.
  • Oxford Guy - Saturday, July 4, 2015 - link

    Because lying to customers about VRAM performance, ROP count, and cache size is a far better way to conduct business.

    Oh, and the 970's specs are still false on Nvidia's website (claims 224 GB/s but that is impossible because of the 28 GB/s partition and the XOR contention — the more the slow partition is used the closer the other partition can get to the theoretical speed of 224 but the more it's used the more the faster partition is slowed by the 28 GB/s sloth — so a catch-22).

    It's pretty amazing that Anandtech came out with a "Correcting the Specs" article but Nvidia is still claiming false numbers on their website.
  • Peichen - Monday, July 6, 2015 - link

    And yet the 970 is still faster. Nvidia is more efficient with resources than they let on.
  • Oxford Guy - Thursday, July 9, 2015 - link

    The XOR contention and 28 GB/s sure is efficiency. If only the 8800 GT could have had VRAM that slow back in 2007.
  • Gunbuster - Saturday, July 4, 2015 - link

    Came for the chizow, was not disappointed.
  • chizow - Monday, July 6, 2015 - link

    :)
  • madwolfa - Saturday, July 4, 2015 - link

    "Throw a couple of these into a Micro-ATX SFF PC, and it will be the PSU, not the video cards, that become your biggest concern".

    I think the biggest concern here would be to fit a couple of 120mm radiators.
  • TheinsanegamerN - Saturday, July 4, 2015 - link

    My current Micro-ATX case has room for dual 120mm rads and a 240mm rad. Plenty of room there.
