Power, Temperature, & Noise

As always, last but not least is our look at power, temperature, and noise. Next to price and performance of course, these are some of the most important aspects of a GPU, due in large part to the impact of noise. All things considered, a loud card is undesirable unless there’s a sufficiently good reason – or sufficiently good performance – to ignore the noise.

AMD RX Series Video Card Voltages
Card                Boost     Idle
Red Devil RX 580    1.2063v   0.7625v
Radeon RX 580       1.1625v   -
Radeon RX 480       1.0625v   -
Nitro+ RX 570       1.1625v   0.725v
Radeon RX 570       1.1v      -
Radeon RX 470       1.0125v   -

As you can likely infer from the earlier discussion on power consumption and TBPs, in order to reach these higher clockspeeds AMD and their partners had to increase their GPU voltages. Relative to both our RX 480 and RX 470, the differences are quite significant. Overall voltages have increased by around 0.1v when using AMD's reference clocks, and closer to 0.15v for the full factory overclocks. As a result, the highest frequencies on these two cards are very expensive in terms of power, which explains a great deal about why AMD needed to increase TBPs by 30-35W just to add another 40-80MHz to the boost clock.
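
To put a rough number on that cost: dynamic power scales approximately linearly with frequency and quadratically with voltage. As a first-order sanity check – a simplified model that ignores static leakage, using the reference-card figures from the voltage table above – the move from the RX 480 to the RX 580 works out to:

```latex
% First-order dynamic power model: P \propto f V^2 (static leakage ignored)
\frac{P_{\mathrm{RX\,580}}}{P_{\mathrm{RX\,480}}}
  \approx \frac{f_{\mathrm{RX\,580}}}{f_{\mathrm{RX\,480}}}
          \left( \frac{V_{\mathrm{RX\,580}}}{V_{\mathrm{RX\,480}}} \right)^{2}
  = \frac{1340}{1266} \left( \frac{1.1625}{1.0625} \right)^{2}
  \approx 1.27
```

A predicted ~27% increase on the RX 480's 150W TBP is roughly 40W, in the same ballpark as the 30-35W TBP increases AMD actually made, and most of it comes from the voltage term rather than the frequency term.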

On the plus side, idle voltages are down for both RX 500 cards. Our RX 400 series cards idled at 0.8v, whereas the RX 580 and RX 570 idle at 0.7625v and 0.725v respectively. This has a minimal impact on a desktop card (especially wall power measurements), but if this is consistent for all AMD chips, it bodes well for the laptop-focused Polaris 11 and Polaris 12 GPUs.

Moving on, let’s take a look at average clockspeeds. Functionally speaking, AMD’s boost mechanism is closer to a fine-grained throttling system: the card always tries to run at its full, advertised boost clock, and pulls back only if there isn’t enough power available or if it triggers thermal throttling. In practice, both the RX 480 and RX 470 regularly power throttled to a small degree; this allowed AMD to keep the cards closer to their optimal point on the clockspeed/power curve.
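
To illustrate the idea, below is a minimal sketch of this kind of control loop in Python. The limits, step size, and telemetry model are hypothetical stand-ins of mine – the real logic lives in AMD's power management firmware – but it captures the "always aim for boost, back off under limits" behavior:

```python
import random

BOOST_CLOCK_MHZ = 1340   # advertised boost clock (reference RX 580)
POWER_LIMIT_W   = 185.0  # board power limit (TBP)
TEMP_LIMIT_C    = 90.0   # hypothetical thermal limit
STEP_MHZ        = 5      # fine-grained adjustment per iteration

# Hypothetical telemetry stand-ins; a real card reads on-board sensors.
def read_power_w(clock_mhz: int) -> float:
    # Toy model: draw rises with clockspeed, plus workload-dependent noise.
    return 0.14 * clock_mhz + random.uniform(-8.0, 8.0)

def read_temp_c() -> float:
    return random.uniform(60.0, 80.0)

def throttle_step(clock_mhz: int) -> int:
    """One iteration of the loop: always aim for the full boost clock,
    pulling back only while a power or thermal limit is exceeded."""
    if read_power_w(clock_mhz) > POWER_LIMIT_W or read_temp_c() > TEMP_LIMIT_C:
        return max(clock_mhz - STEP_MHZ, 300)          # back off slightly
    return min(clock_mhz + STEP_MHZ, BOOST_CLOCK_MHZ)  # climb back toward boost

# Under a heavy load the card oscillates just under its power limit, which
# is why average clocks land a bit below the advertised boost clock.
clock = BOOST_CLOCK_MHZ
samples = []
for _ in range(1000):
    clock = throttle_step(clock)
    samples.append(clock)
print(f"average clock: {sum(samples) / len(samples):.0f}MHz")
```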

Radeon Video Card Average Clockspeeds
Game            Red Devil RX 580   RX 580    RX 480    Nitro+ RX 570   RX 570    RX 470
Tomb Raider     1380MHz            1280MHz   1230MHz   1340MHz         1244MHz   1190MHz
DiRT Rally      1380MHz            1340MHz   1266MHz   1340MHz         1244MHz   1206MHz
Ashes           1360MHz            1250MHz   1200MHz   1330MHz         1230MHz   1150MHz
Battlefield 4   1380MHz            1340MHz   1266MHz   1340MHz         1244MHz   1206MHz
Crysis 3        1380MHz            1300MHz   1250MHz   1340MHz         1244MHz   1190MHz
The Witcher 3   1370MHz            1260MHz   1220MHz   1340MHz         1230MHz   1170MHz
The Division    1375MHz            1290MHz   1230MHz   1340MHz         1244MHz   1180MHz
GTA V           1380MHz            1340MHz   1266MHz   1340MHz         1244MHz   1206MHz
Hitman          1365MHz            1250MHz   1200MHz   1330MHz         1230MHz   1130MHz

Besides supporting higher clockspeeds overall, the higher TBPs of the RX 580 and RX 570 mean that these cards power throttle less often than their predecessors. To be clear, they still throttle, but the average degree of throttling across our game set is lower than with the earlier cards, so the RX 580 and RX 570 should be running at or near their maximum clockspeeds more often. This leaves a bit less headroom, but it improves out-of-the-box performance.

Add the fully unlocked factory overclocks into the mix, and we find that throttling is reduced further still. The factory overclock BIOSes on these cards carry even higher power limits, so even at their higher clockspeeds, the cards throttle less often. The PowerColor Red Devil RX 580 never averages below 1360MHz in a game, and the Sapphire Nitro+ RX 570 only shaves off all of 10MHz in two of our games. This is also a big part of why the factory overclocked cards are as fast as they are; the higher boost clocks are part of the story, but the reduced throttling further boosts performance over the baseline cards.

Idle Power Consumption

Since AMD fixed their Polaris idle power driver bug last year, their idle power numbers have been rather consistent. The earlier Polaris 10 cards averaged around 75W at the wall, and these new cards do the same.

Load Power Consumption - Crysis 3

As for load power consumption, this is where AMD pays the piper, so to speak. Roughly in line with AMD’s TBPs, power consumption at the wall has increased by a bit over 20W for both the RX 580 and RX 570 relative to their predecessors. At this point the RX 570 is approaching 300W, and the RX 580 is just shy of 325W. This puts the power consumption of the RX 570 at 10W under the GeForce GTX 1070, while the RX 580 is 17W above it. It goes without saying that both are well above the GTX 1060 cards that AMD is competing with in terms of performance.

Throwing in the factory overclocks further pours on the power. The Nitro+ system needs 330W here, and the Red Devil system 360W, each around 35W more than their reference-clocked configurations. Bear in mind that this is total system power, so part of the increase comes from the higher CPU power consumption that results from higher framerates, but given the limited framerate difference from the factory overclock, the bulk of the power increase here does come from the cards themselves.
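
As a rough conversion from wall power to card power: these wall measurements include PSU losses, so assuming a PSU efficiency of around 90% at these loads – an assumption on my part, not a measured figure – that 35W wall delta corresponds to something like:

```latex
\Delta P_{\mathrm{card}} \approx \eta_{\mathrm{PSU}} \times \Delta P_{\mathrm{wall}}
  \approx 0.9 \times 35\,\mathrm{W} \approx 31\,\mathrm{W}
```

minus the handful of watts going to the CPU at the slightly higher framerates.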

Load Power Consumption - FurMark

FurMark gives us a more focused view of GPU power consumption, and it tells a similar tale to Crysis 3. We’re looking at a 32W increase in power at the wall for the RX 570, and a 19W increase for the RX 580, the latter actually a bit less than I was expecting. The new RX 500 series cards do look better against NVIDIA’s GeForce cards here, but as I’ve previously mentioned in other reviews, in this generation FurMark only seems to be consistent between cards from the same GPU vendor. Cross-vendor comparisons are more accurate under Crysis 3.

Meanwhile, we get a second point of view on the power consumption of the factory overclocked cards. All told, the higher factory overclocks cause FurMark power consumption to jump by 40W or so. FurMark is a pathological case of course, so games rarely (if ever) draw the same amount of power, but this shows why the factory overclocked cards don’t throttle as much: in their factory overclocked configurations, both cards have power limits significantly higher than the original reference RX 480’s.

Idle GPU Temperature

At idle, both RX 500 cards implement zero fan speed idle. As a result, their temperatures are a degree or two warmer than most of the pack. But idle power consumption is so low that these cards have little trouble dissipating that heat with just their heatsinks.

Load GPU Temperature - Crysis 3

Judging from their temperatures under Crysis 3, both the PowerColor and Sapphire cards are tuned for a balance of noise and temperature. The open-air cooled cards reach just shy of 70C when underclocked to AMD’s reference clocks, and 75C with their respective factory overclocks. With their massive coolers, neither card has any trouble with this amount of heat; they simply aren’t spinning their fans up very fast, in order to keep noise levels down.

Load GPU Temperature - FurMark

With FurMark the story is much the same as with Crysis 3. As it turns out, both cards have a 75C soft cap; once the GPU reaches that temperature, the fans spin up further as necessary to keep temperatures from going any higher. It should be noted that neither card appears to temperature throttle, even under FurMark, as the power throttle is more than sufficient.
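
That behavior maps onto a simple temperature-target fan curve. Here's a minimal sketch of the idea; the 75C target matches what we observed, while the idle threshold, baseline speed, and ramp rate are illustrative guesses rather than either vendor's actual firmware values:

```python
TEMP_TARGET_C  = 75.0   # the soft cap both cards appear to use
ZERO_FAN_BELOW = 50.0   # hypothetical zero fan speed idle threshold
BASE_PCT       = 30.0   # hypothetical quiet baseline fan speed
GAIN_PCT_PER_C = 5.0    # hypothetical ramp rate past the target

def fan_speed_pct(temp_c: float) -> float:
    """Zero fan speed at idle, a quiet fixed speed under typical gaming
    loads, and a proportional ramp once the 75C soft cap is exceeded."""
    if temp_c < ZERO_FAN_BELOW:
        return 0.0                        # zero fan speed idle
    if temp_c <= TEMP_TARGET_C:
        return BASE_PCT                   # tuned for noise under games
    overshoot = temp_c - TEMP_TARGET_C    # FurMark-style worst case
    return min(BASE_PCT + GAIN_PCT_PER_C * overshoot, 100.0)

# e.g. Crysis 3 at ~70C stays at the quiet baseline, while FurMark pushing
# past 75C spins the fans up just far enough to hold the line.
for t in (35.0, 70.0, 78.0):
    print(f"{t:.0f}C -> {fan_speed_pct(t):.0f}% fan")
```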

Idle Noise Levels

Finally with idle noise levels, both cards are silent thanks to their zero fan speed idle implementations. The only noise that’s left comes from the rest of the GPU testbed.

Load Noise Levels - Crysis 3

Moving to load noise, both cards continue to impress. Even when running at its full factory overclock, the Sapphire RX 570 barely gets above the noise floor; it’s dissipating 150W (or more) of heat in near silence. PowerColor’s Red Devil RX 580 fares similarly well; it stays under 40dB(A) at AMD’s reference clocks, and only hits 42dB(A) when fully factory overclocked. Open air coolers have their strengths and weaknesses, but one thing is for sure: manufacturers have increasingly honed their hardware and fan speed algorithms, and these days they are producing consistently awesome results.

Load Noise Levels - FurMark

As for FurMark, noise levels pick up as you’d expect. When underclocked to AMD’s specifications, both cards stay below 40dB(A). It’s only once their factory overclocks and higher power limits kick in that noise starts to become meaningful. The Sapphire RX 570 holds to 42.5dB(A) here, while the Red Devil RX 580 finally becomes a meaningful source of noise at 48.2dB(A). Though as this is FurMark pushing against the Red Devil’s rather high power limit, I doubt the card will come anywhere close to this noise level under any actual gaming workload.
