Power, Temperatures, & Noise

Last, but not least of course, is our look at power, temperatures, and noise levels. While a high performing card is good in its own right, an excellent card can deliver great performance while also keeping power consumption and the resulting noise levels in check.

Radeon Video Card Voltages
              5600 XT Max   5700 Max   5600 XT Idle   5700 Idle
              0.977v        1.025v     0.775v         0.775v

Interestingly, even with the BIOS update and AMD’s voltage-frequency curve extension, the voltages being used by our Sapphire Radeon RX 5600 XT are quite tame. The card never goes higher than 0.977v, which is almost 0.05v lower than the Radeon RX 5700, itself already a good deal lower than the full-fat Navi 10-based Radeon RX 5700 XT. While third-tier cards like the RX 5600 XT are uncommon, when we do see them they are normally running higher voltage (leakier) parts, and this is what I was expecting for AMD’s new part.

No wonder AMD is talking up the power efficiency of the card; even with its restricted clockspeeds, not going above 1.0v helps to ensure that the card doesn't pay a massive power penalty – and take a dive in power efficiency – just to access the last few MHz worth of headroom.
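To put a rough number on why that voltage gap matters, dynamic power scales approximately with frequency times voltage squared. Using the measured voltages above and the official Game Clocks from the table below, a first-order sketch (this is an illustration, not AMD's actual power model) shows the voltage drop compounding the clockspeed reduction:

```python
# First-order dynamic power proxy: P_dyn ~ f * V^2.
# Frequencies are the official Game Clocks; voltages are the
# measured maximums above. Only a rough illustrative sketch.
def relative_dynamic_power(freq_mhz: float, volts: float) -> float:
    """Return f * V^2, an unnormalized proxy for dynamic power."""
    return freq_mhz * volts ** 2

p_5600xt = relative_dynamic_power(1375, 0.977)  # RX 5600 XT
p_5700 = relative_dynamic_power(1625, 1.025)    # RX 5700

# The 5600 XT's clock is only ~15% lower, but the proxy comes
# out ~23% lower thanks to the squared voltage term.
print(f"5600 XT / 5700 ratio: {p_5600xt / p_5700:.2f}")  # → 0.77
```

The squared voltage term is exactly why shaving that last ~0.05v off the peak buys disproportionately more efficiency than the clockspeed numbers alone would suggest.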

Radeon Video Card Average Clockspeeds
(Rounded to the Nearest 10MHz)
Game                  Sapphire 5600 XT (Perf)   5600 XT   5700
Max Boost Clock       1760MHz                   1670MHz   1750MHz
Official Game Clock   1615MHz                   1375MHz   1625MHz
Tomb Raider           1700MHz                   1610MHz   1680MHz
F1 2019               1700MHz                   1610MHz   1650MHz
Assassin's Creed      1700MHz                   1660MHz   1700MHz
Metro Exodus          1700MHz                   1600MHz   1640MHz
Strange Brigade       1740MHz                   1660MHz   1660MHz
Total War: TK         1740MHz                   1660MHz   1690MHz
The Division 2        1700MHz                   1610MHz   1630MHz
Grand Theft Auto V    1720MHz                   1640MHz   1690MHz
Forza Horizon 4       1730MHz                   1650MHz   1700MHz

Shifting over to clockspeeds, things look very good for the RX 5600 XT. The card’s clockspeeds are remarkably consistent, and this comes down to the fact that the card is rarely ever entirely power-bound. Rather, the card is running out of room on the voltage-frequency curve, making it very easy to get close to its peak clockspeeds in the process. This goes hand-in-hand with the relatively low voltage, allowing the card to run rather efficiently and avoid heavier power throttling.

Unsurprisingly, the card is closer to its peak when running at AMD’s reference clockspeeds than it is in the default factory overclocked mode. Still, the latter sees the card average higher clockspeeds than the RX 5700, underscoring how these factory overclocked cards are primed to hit high clockspeeds, and that it’s going to be the lack of memory bandwidth that ultimately keeps them chasing the RX 5700.
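For reference, the "rounded to the nearest 10MHz" averages in the table above come from a simple mean over per-run telemetry samples. The sketch below uses made-up sample values, not our actual logs:

```python
# Average a run of logged clockspeed samples and round to the
# nearest 10MHz, matching the presentation in the table above.
# The sample readings here are hypothetical, for illustration only.
def average_clock_mhz(samples: list[float]) -> int:
    """Mean of the samples, rounded to the nearest 10 MHz."""
    mean = sum(samples) / len(samples)
    return int(round(mean / 10) * 10)

logged = [1697, 1703, 1699, 1701, 1698]  # hypothetical per-second readings
print(average_clock_mhz(logged))  # → 1700
```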

Idle Power Consumption

Load Power Consumption - Shadow of the Tomb Raider

Load Power Consumption - FurMark

Taking a look at power consumption, we again find a good showing from Sapphire and AMD. The card’s idle power consumption shaves off a couple of watts at the wall relative to the RX 5700, thanks to the lower amount of VRAM and fully idled fans.

Meanwhile load power consumption also fares comparatively well. At reference clocks, we see AMD’s claims of higher power efficiency first-hand; measured at the wall, the card draws around 30W less than the RX 5700. Equally important, this keeps it relatively close to the GeForce GTX 1660 series. And while the RX 5600 XT ultimately ends up drawing more power, as we’ve seen it also handily outperforms those cards.

Sapphire’s factory overclock, however, is both a curse and a blessing. The blessing is that it improves the card’s performance to RTX 2060-levels, but the curse is that it does so while pushing power consumption to near RX 5700-levels. So when running at full tilt, the Pulse RX 5600 XT is less power efficient than the RX 5700, owing to its overall lower performance. Then again, it’s not like NVIDIA was doing very well here to begin with; even when factory overclocked, the Pulse ends up drawing a bit less power than the nearest NVIDIA card.

I will also quickly note that the delta in power consumption between FurMark and Tomb Raider is higher for the RX 5600 XT than we’ve seen in other Navi cards. All told, even at 1440p with the highest available settings, Tomb Raider is having a hard time keeping the RX 5600 XT busy enough for the card to run at its maximum clockspeeds. This has turned Tomb Raider into something of a best case scenario, as the card gets to idle a little bit.

Idle GPU Temperature

Load GPU Temperature - Shadow of the Tomb Raider

Load GPU Temperature - FurMark

Moving on to temperatures, the large card has no problem cooling itself. In quiet mode the card never passes 68C, and even in full-on performance mode, the temperature maxes out at 74C. From a temperature perspective, Sapphire seems to have just about perfectly tuned the card’s cooler.

Idle Noise Levels

Load Noise Levels - Shadow of the Tomb Raider

Load Noise Levels - FurMark

Finally, with noise, we get a chance to see how quietly the oversized Pulse can operate. And the answer is “very quietly”. At idle the card is entirely silent, thanks to its zero fan speed idle mode. Meanwhile with the quiet mode BIOS clocked at AMD’s reference clockspeeds, the fans only have to go to the card’s bare minimum fan speed – 25 percent – to keep the card cool, even under FurMark. To put this in context, we’re looking at a 150W card that’s having all of its cooling needs met by a pair of fans running at a mere 750 RPM, which is a fraction of what they can actually run at.
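If we assume the duty-cycle-to-RPM mapping is roughly linear (an assumption on our part; real fan curves are often nonlinear), that 25 percent / 750 RPM data point implies a maximum of around 3000 RPM:

```python
# If 25% fan speed corresponds to 750 RPM, a linear duty-to-RPM
# mapping (an assumption; actual fan curves are often nonlinear)
# implies a maximum of about 750 / 0.25 = 3000 RPM.
def rpm_at_duty(duty_pct: float, max_rpm: float = 750 / 0.25) -> float:
    """Estimated fan speed at a given duty cycle percentage."""
    return max_rpm * duty_pct / 100

print(rpm_at_duty(25))   # → 750.0
print(rpm_at_duty(100))  # → 3000.0
```

In other words, the cooler is loafing along at a quarter of its estimated ceiling while handling the card's full reference-clock heat load.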

Even with the factory overclock active, the cooler has no problem keeping up. Load noise for the 180W mode peaks at 41.2dB(A), which is close to silent and a great result overall. So for as much ribbing as I give Sapphire for the somewhat absurd size of the card, there’s no arguing with the effectiveness of the cooler. It’s quieter than almost every other card in this chart, including the GeForce GTX 1660 series cards and NVIDIA’s own reference RTX 2060.

Comments

  • Ryan Smith - Thursday, January 23, 2020 - link

    Unfortunately Blender doesn't play nicely with new hardware. Or with AMD's currently buggy OpenCL drivers.
  • ozzuneoj86 - Wednesday, January 22, 2020 - link

    The stagnation in the sub-$300 video card market is getting pretty tiresome. I was unimpressed when the GTX 1060 6GB came out in 2016 and was barely faster than the GTX 970 from 2014 (which I bought new in early 2015 for around $250 on sale). Now, 3 1/2 years later we're getting only marginally faster products in the low $200 price range (1660, 5500xt). If you already have a card that was in the $200-$250 price range any time within the past *5 years*, you have to spend $280-$300 to get any kind of noticeable upgrade.

    As a comparison, that'd be like if the GTX 970 I bought on sale for $250 in 2015 (an admittedly great price, but not unheard of) had performed no better than a GTX 460... or even a GTX 470. That sounds absurd now, and yet that's what the mid range market has turned into.
  • philosofool - Wednesday, January 22, 2020 - link

    This seems like a strange analysis to me. This card is a legit entry level 1440p card, which has never existed in the sub-$300 range before.
  • cmdrmonkey - Wednesday, January 22, 2020 - link

    nVidia is charging more and giving us less than they ever have in the past because they have no meaningful competition from AMD.
  • Spunjji - Thursday, January 23, 2020 - link

    It's true that Nvidia haven't offered anything like the value proposition that the GTX 970 was on its launch since then, and things have definitely slowed down in the GPU arena. I'm not entirely on board with this criticism overall, though.

    First off, it's a bit unfair to compare the price of a card you got on sale with launch pricing. The 970 launched at $330, which was an absolute steal but still more than $250.
    Second, the 1060 provided performance that was better than a GTX 980 (and about 20-40% better than a 970, depending on the game and resolution) for $250. AMD countered with the 580 and, well, to be fair that was pretty much that until now.

    That's why it confuses me that you'd complain now, when the 5600XT (and the price drops it inspired) means we can *finally* get performance that's 50-100% better than the 970 at a lower price. It took about twice as long as it used to, for sure, and that just seems to be how things are now.
  • ozzuneoj86 - Thursday, January 23, 2020 - link

    Sorry, I wasn't trying to make an unfair comparison. I was just thinking more in terms of time... 5 years, which used to be an incredibly long time in this industry. If we're comparing launch dates and pricing, then it has taken six years to get a large upgrade for a GTX 970 at a lower price... though arguably the RX 5700 fit that bill last summer when it was often available on sale for $300 or a little less. To me, that makes the 5600XT with less memory a lot less interesting for only $20 less. These cards are fine if people have the money for them, but the slow progress is what is getting to me. Compared to the massive changes we've seen AMD bring about in the CPU market, the GPU market is very stale. There aren't any no-brainer purchases at any tier if you have a mid-range GPU from within the past 5 years. This is probably the closest we've come, as you said, but its by such a small margin. If we had performance like this for closer to $200 it would have shaken things up and made GPUs interesting again. Instead, we have the same back and forth about whether it's worth it to spend another $20 and get last year's 2060, or to buy a 4 year old used 1070 for $190 on eBay, or to simply lower the settings a couple notches and stick to the 6 year old GTX 970.

    This isn't really relevant, but... I guess my 970 actually ended up being more like $220, because I got a $30 check from nvidia due to that memory settlement. And then, well, I did sell the DLC codes that came with the card so it was closer to $200. That ends up being like $40 per year... thanks nvidia! :P
  • peevee - Wednesday, January 22, 2020 - link

    Ryan, because you mention all the time that 6GB of VRAM might not be enough soon, can you write an article explaining the major uses of VRAM by various applications?

    It seems like neither compressed textures nor 3d models of everything needed at the same time (or within a few seconds) could take as much, and everything else can be preloaded quickly on the fly, especially with PCIe4x16... as it allows to update half of that 6GB VRAM every 1/10th of a second.
  • cmdrmonkey - Wednesday, January 22, 2020 - link

    Looks okay, but nobody is going to buy it because nobody actually buys AMD video cards. If you doubt this look at the Steam hardware survey.
  • Korguz - Thursday, January 23, 2020 - link

    " look at the Steam hardware survey. " and that is 100% reliable ? BS. not every one has and uses steam, so no.. NOT a reliable metric. those that i know.. dont all run nvidia cards, some have radeons, and they dont have steam...
  • cmdrmonkey - Thursday, January 23, 2020 - link

    Steam has over 1 billion accounts and 90 million monthly users. The hardware survey has a sample size in the millions. Medical and psychological studies don't even have sample sizes like that. I'd say it's a damn good indicator of what most people are using.
