Power, Temperature, & Noise

As always, last but not least is our look at power, temperature, and noise. Next to price and performance of course, these are some of the most important aspects of a GPU, due in large part to the impact of noise. All things considered, a loud card is undesirable unless there’s a sufficiently good reason – or sufficiently good performance – to ignore the noise.

Starting with voltages, we unfortunately have nothing to report for the R9 Fury X at this time. AMD is either not exposing voltages in their drivers, or our existing tools (e.g. MSI Afterburner) do not know how to read the data, and as a result we cannot see any of the voltage information.

Radeon R9 Fury X Average Clockspeeds

Game                        R9 Fury X
Max Boost Clock             1050MHz
Battlefield 4               1050MHz
Crysis 3                    1050MHz
Mordor                      1050MHz
Civilization: BE            1050MHz
Dragon Age                  1050MHz
Talos Principle             1050MHz
Far Cry 4                   1050MHz
Total War: Attila           1050MHz
GRID Autosport              1050MHz
Grand Theft Auto V          1050MHz
FurMark                     985MHz

Jumping straight to average clockspeeds then, with an oversized cooler and a great deal of power headroom, the R9 Fury X has no trouble hitting and sustaining its 1050MHz boost clockspeed throughout every second of our benchmark runs. The card was designed to be the pinnacle of Fiji cards, and ensuring it always runs at a high clockspeed is one of the elements in doing so. The lack of throttling means there’s really little to talk about here, but it sure gets results.
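
For those curious how figures like these are tallied, the sketch below shows one way to average a card's logged core clocks while excluding idle samples. It's a minimal sketch only; the CSV layout (a "core_clock_mhz" column) is a hypothetical placeholder rather than any particular monitoring tool's actual export format.

```python
# Average sustained clockspeed from a frequency log, skipping idle samples.
# The column name "core_clock_mhz" is a hypothetical placeholder.
import csv

def average_clock(log_path, idle_threshold_mhz=400):
    """Average the core clock over samples where the GPU is under load.

    Samples at or below idle_threshold_mhz are excluded (the Fury X idles
    at 300MHz) so idle periods between runs don't drag down the mean.
    """
    clocks = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            mhz = float(row["core_clock_mhz"])
            if mhz > idle_threshold_mhz:
                clocks.append(mhz)
    return sum(clocks) / len(clocks) if clocks else 0.0

print(f"Average sustained clock: {average_clock('clock_log.csv'):.0f}MHz")
```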

Idle Power Consumption

Idle power does not start things off especially well for the R9 Fury X, though it's not too poor either. The 82W at the wall is a distinct increase over NVIDIA's latest cards, and even the R9 290X. On the other hand the R9 Fury X has to run a CLLC rather than simple fans. Further complicating matters, the card idles at 300MHz on the core, but the memory doesn't idle at all. HBM is meant to have rather low power consumption under load versus GDDR5, but one wonders just how that compares at idle.

Load Power Consumption - Crysis 3

Switching to load power consumption, we go first with Crysis 3, our gaming load test. Earlier in this article we discussed all of the steps AMD took to rein in power consumption, and the payoff is seen here. Equipped with an R9 Fury X, our system pulls 408W at the wall – a significant amount of power, but only 20W more than the same system with a GTX 980 Ti. Given that the R9 Fury X's framerates trail the GTX 980 Ti here, this puts AMD's overall energy efficiency in a less-than-ideal spot, but it's not poor either, especially compared to the R9 290X: power consumption has essentially stayed put while performance has gone up 35%+.
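
To make that efficiency comparison concrete, here is the back-of-the-envelope version; the 35% performance figure and the flat power figure both come from the text above.

```python
# Perf-per-watt moves in direct proportion to performance when power is flat.
perf_gain = 1.35    # R9 Fury X performance relative to the R9 290X (from the text)
power_ratio = 1.00  # wall power "essentially stayed put"
print(f"Perf/W improvement: {perf_gain / power_ratio - 1:.0%}")  # -> 35%
```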

On a side note, as we mentioned in our architectural breakdown, the amount of power this card draws will depend on its temperature. 408W at the wall at 65C is only 388W at the wall at 40C, as current leakage scales with GPU temperature. Ultimately the R9 Fury X will trend towards 65C, but it means that early readings can be a bit misleading.
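
Treating those two readings as endpoints of a linear fit gives a rough rate of change; this is a simplification, since leakage actually grows faster than linearly with temperature, but it bounds the effect for readings taken in between.

```python
# Rough leakage-vs-temperature rate from the two wall-power readings above.
watts_at_temp = {40: 388, 65: 408}  # wall power (W) at GPU temperature (C)
slope = (watts_at_temp[65] - watts_at_temp[40]) / (65 - 40)
print(f"~{slope:.1f}W extra at the wall per degree C")  # ~0.8W/C
```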

Load Power Consumption - FurMark

As for FurMark, what we find is that power consumption at the wall is much higher, which, far from being a problem for AMD, proves that the R9 Fury X has much greater thermal and electrical limits than the R9 290X, or the NVIDIA competition for that matter. AMD ultimately does throttle the R9 Fury X here, to around 985MHz, but the card easily draws quite a bit of power and dissipates quite a bit of heat in the process. If the card had a gaming scenario that called for greater power consumption – say BIOS-modded overclocking – then these results paint a favorable picture.

Idle GPU Temperature

Moving on to temperatures, the R9 Fury X starts off looking very good. Even at minimum speeds the pump and radiator lead to the Fiji GPU idling at just 27C, cooler than anything else on this chart. More impressive still is the R9 290X comparison, where the R9 Fury X is some 15C cooler, and this is just at idle.

Load GPU Temperature - Crysis 3

Loading up a game, after a good 10 minutes or so the R9 Fury X finally reaches its equilibrium temperature of 65C. Though the default target GPU temperature is 75C, it's at 65C that the card begins to ramp up the fan in order to increase cooling performance. The end result is that the card reaches equilibrium at this point, and in our experience should not exceed this temperature.

Compared to the NVIDIA cards, this is an 18C advantage in AMD’s favor. GPU temperatures are not everything – ultimately it’s fan speed and noise we’re more interested in – but for AMD GPU temperatures are an important component of controlling GPU power consumption. By keeping the Fiji GPU at 65C AMD is able to keep leakage power down, and therefore energy efficiency up. R9 Fury X would undoubtedly fare worse in this respect if it got much warmer.
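
In effect this is a temperature-targeting fan controller: sit at the fan floor until the GPU nears its set point, then ramp just enough to hold it there. The toy model below illustrates the idea only; it is not AMD's actual fan-control algorithm, and the floor and gain values are invented.

```python
# Toy temperature-targeting fan controller (hypothetical values).
def fan_duty(gpu_temp_c, set_point_c=65.0, floor_pct=15.0, gain_pct_per_c=2.0):
    """Return fan duty (%) for a given GPU temperature."""
    overshoot = max(0.0, gpu_temp_c - set_point_c)  # degrees above target
    return min(100.0, floor_pct + gain_pct_per_c * overshoot)

for temp in (40, 60, 65, 67, 75):
    print(f"{temp}C -> {fan_duty(temp):.0f}% fan duty")
```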

Finally, it’s once again remarkable to compare the R9 Fury X to the R9 290X. With the former AMD has gone cool to keep power down, whereas with the latter AMD went hot to improve cooling efficiency. As a result the R9 Fury X is some 29C cooler than the R9 290X. One can only imagine what that has done for leakage.

Load GPU Temperature - FurMark

The situation is more or less the same under FurMark. The NVIDIA cards are set to cap at 83C and the R9 Fury X at 65C, regardless of whether the workload is a game or a power virus like FurMark.

Idle Noise Levels

Last but not least, we have noise. Starting with idle noise, as we mentioned in our look at the build quality of the R9 Fury X, the card’s cooler is effective under load, but a bit of a liability at idle. The use of a pump brings with it pump noise, and this drives up idle noise levels by around 4dB. 41.5dB is not too terrible for a closed case, and it’s not an insufferable noise, but HTPC users will want to be wary. If anything, this makes a good argument for looking forward to the R9 Nano.

Load Noise Levels - Crysis 3

Because the R9 Fury X starts out a bit loud due to pump noise, the actual noise increase under load is absolutely minuscule. The card tops out at 19% fan speed, 4% (or about 100 RPM) over its default fan speed of 15%. As a result we measure an amazing 43dB under load – remarkable for a high performance video card, let alone one within spitting distance of NVIDIA’s flagship, itself one of the best air cooled video cards of all time.
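
Those two figures also pin down the local slope of the fan curve, for anyone curious; extrapolating beyond this range assumes a linearity that fan curves don’t guarantee.

```python
# Local slope of the fan curve implied by the measurements above.
delta_duty_pct = 19 - 15  # idle-to-load duty change, percentage points
delta_rpm = 100           # approximate corresponding RPM change
print(f"~{delta_rpm / delta_duty_pct:.0f} RPM per percentage point")  # ~25
```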

These results admittedly were not unexpected – one need only look at the R9 295X2 to get an idea of what a CLLC could do for noise – but they are nonetheless extremely impressive. Most midrange cards are louder than this despite offering a fraction of the R9 Fury X’s gaming performance, which puts the R9 Fury X at a whole new level for load noise from a high performance video card.

Load Noise Levels - FurMark

The trend continues under FurMark. The fan speed ramps up quite a bit further here thanks to the immense load, but the R9 Fury X still perseveres. 46.7 dB(A) is once again better than a number of mid-range video cards, never mind the other high-end cards in this roundup. The R9 Fury X is dissipating 330W of heat and yet it’s quieter than the GTX 980 at half that heat, and around 6 dB(A) quieter than the 250W GM200 cards.
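
To put that 6 dB(A) gap in perspective: by the common rule of thumb that perceived loudness roughly doubles for every 10 dB, the Fury X comes across as roughly two-thirds as loud as the GM200 cards.

```python
# Perceived loudness ratio under the "double per +10 dB" rule of thumb.
delta_db = 6.0
loudness_ratio = 2 ** (-delta_db / 10)  # 2^(-0.6) ~= 0.66
print(f"Perceived loudness vs. the GM200 cards: ~{loudness_ratio:.2f}x")
```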

There really aren’t enough nice things I can say about the R9 Fury X’s cooler. AMD took the complaints about the R9 290 series to heart, and produced something that wasn’t just better than their previous attempt, but a complete inverse of their earlier strategy. The end result is that the R9 Fury X is very nearly whisper quiet under gaming loads, and only a bit louder under even the worst case scenario. This is a remarkable change, and one that ears everywhere will appreciate.

That said, the mediocre idle noise showing will undoubtedly dog the R9 Fury X in some situations. For most cases it will not be an issue, but it does close some doors on ultra-quiet setups. The R9 Fury X in that respect is merely very, very quiet.

Comments

  • chizow - Friday, July 3, 2015

    Pretty much, AMD supporters/fans/apologists love to parrot the meme that Intel hasn't innovated since the original i7 or whatever, and while development there has certainly slowed, we have a number of 18-core E5-2699 v3 servers in my data center at work, Broadwell Iris Pro iGPs that handily beat AMD APUs and approach low-end dGPU performance, and ultrabooks and tablets that run on fanless 5W Core M CPUs. Oh, and I've also managed to find meaningful desktop upgrades every few years for no more than $300 since Core 2 put me back in Intel's camp for the first time in nearly a decade.
  • looncraz - Friday, July 3, 2015

    None of what you stated is innovation, merely minor evolution. The core design is the same, gaining only ~5% or so IPC per generation, same basic layouts, same basic tech. Are you sure you know what "innovation" means?

    Bulldozer modules were an innovative design. A failure, but still very innovative. Pentium Pro and Pentium 4 were both innovative designs, both seeking performance in very different ways.

    Multi-core CPUs were innovative (AMD), HBM is innovative (AMD+Hynix), multi-GPU was innovative (3dfx), SMT was innovative (IBM, Alpha), CPU+GPU was innovative (Cyrix, IIRC)... you get the idea.

    Doing the exact same thing, more or less the exact same way, but slightly better, is not innovation.
  • chizow - Sunday, July 5, 2015

    Huh? So putting Core level performance in a passive design that is as thin as a legal pad and has 10 hours of battery life isn't innovation?

    Increasing iGPU performance to the point that it provides not only top-end CPU performance but close to dGPU performance, convincingly beating AMD's entire reason for buying ATI – their Fusion APUs – isn't innovation?

    And how about the data center where Intel's *18* core CPUs are using the same TDP and sockets, in the same U rack units as their 4 and 6 core equivalents of just a few years ago?

    Intel is still innovating in different ways, that may not directly impact the desktop CPU market but it would be extremely ignorant to claim they aren't addressing their core growth and risk areas with new and innovative products.

    I've bought more Intel products in recent years vs. prior strictly because of these new innovations that are allowing me to have high performance computing in different form factors and use cases, beyond being tethered to my desktop PC.
  • looncraz - Friday, July 3, 2015

    Show me Intel CPU innovations since the Pentium 4.

    Mind you, innovations can be failures, they can be great successes, or they can be ho-hum.

    P6->Core->Nehalem->Sandy Bridge->Haswell->Skylake

    The only changes are evolutionary or as a result of process changes (which I don't consider CPU innovations).

    This is not to say that they aren't fantastic products - I'm rocking an i7-2600k for a reason - they just aren't innovative products. Indeed, nVidia's Maxwell is a wonderfully designed and engineered GPU, and products based on it are of the highest quality and performance. That doesn't make them innovative in any way. Nothing technically wrong with that, but I wonder how long it would have been before someone else came up with a suitable RAM just for GPUs if AMD hadn't done it?
  • chizow - Sunday, July 5, 2015

    I've listed them above and despite slowing the pace of improvements on the desktop CPU side you are still looking at 30-45% improvement clock for clock between Nehalem and Haswell, along with pretty massive improvements in stock clock speed. Not bad given they've had literally zero pressure from AMD. If anything, Intel dominating in a virtual monopoly has afforded me much cheaper and consistent CPU upgrades, all of which provided significant improvements over the previous platform:

    E6600 $284
    Q6600 $299
    i7 920 $199!
    i7 4770K $229
    i7 5820K $299

    All cheaper than the $450 AMD wanted for their ENTRY level Athlon 64 when they finally got the lead over Intel, which made it an easy choice to go to Intel for the first time in nearly a decade after AMD got Conroe'd in 2006.
  • silverblue - Monday, July 6, 2015

    I could swear that you've posted this before.

    I think the drop in prices was more of an attempt to strangle AMD than anything else. Intel can afford it, after all.
  • chizow - Monday, July 6, 2015

    Of course I've posted it elsewhere because it bears repeating, the nonsensical meme AMD fanboys love to parrot about AMD being necessary for low prices and strong competition is a farce. I've enjoyed unparalleled stability at a similar or higher level of relative performance in the years that AMD has become UNCOMPETITIVE in the CPU market. There is no reason to expect otherwise in the dGPU market.
  • zoglike@yahoo.com - Monday, July 6, 2015

    Really? Intel hasn't innovated? I really hope you are trolling because if you believe that I fear for you.
  • chizow - Thursday, July 2, 2015

    Let's not also discount the fact that that's just stock comparisons; once you overclock the cards, as many are interested in doing in this $650 bracket, especially given AMD's claims that Fury X is an "Overclocker's Dream", we quickly see the 980Ti cannot be touched by Fury X, water cooler or not.

    Fury X wouldn't have been the failure it is today if not for AMD setting unrealistic and, ultimately, unattained expectations. A 390X WCE at $550-$600 would have been a solid alternative. A $650 new "Premium" brand that doesn't OC at all, has only 4GB, has pump whine issues, and is slower than Nvidia's same-priced $650 980Ti that launched 3 weeks before it just doesn't get the job done after AMD hyped it from the top brass down.
  • andychow - Thursday, July 2, 2015

    Yeah, "Overclocker's dream", only overclocks by 75 MHz. Just by that statement, AMD has totally lost me.
