Power, Temperature, & Noise

Last, but not least of course, is our look at power, temperatures, and noise levels. While a high-performing card is good in its own right, an excellent card can deliver great performance while also keeping power consumption and the resulting noise levels in check.

Radeon Video Card Voltages
              Max        Idle
RX 5700       1.025v     0.775v
RX 5500 XT    1.141v     0.700v

Back when the RX 5700 series launched, AMD’s voltages surprised me; the RX 5700 XT went as high as 1.2v on TSMC’s 7nm process. For better or worse, it looks like those voltages aren’t a fluke, as we see high voltages with the RX 5500 XT as well. In this case the card tops out at 1.141v, a not insubstantial decrease from the RX 5700 XT, though it’s still relatively high. AMD’s GPUs are still the only high-throughput parts we’ve seen voltage data for on this process, so it’s hard to say whether this is a TSMC thing or an AMD thing. But either way, as AMD’s own voltage/frequency curve helpfully illustrates, the last couple of hundred MHz on the RX 5500 XT get to be quite expensive in terms of power.
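As a rough back-of-the-envelope sketch of why that is, assume dynamic power scales as P ∝ f·V². The 1860MHz/1.141v point below is our measured maximum; the 1600MHz/1.0v mid-curve point is an assumption for illustration only, not a value taken from AMD's actual V/F tables.

```python
# Rough illustration of why the last few hundred MHz are expensive:
# dynamic switching power scales roughly as P ~ f * V^2.
# The mid-curve operating point below is assumed for illustration,
# not a value from AMD's actual voltage/frequency curve.

def relative_dynamic_power(freq_mhz: float, volts: float) -> float:
    """Dynamic power up to a constant factor (the capacitance term is dropped)."""
    return freq_mhz * volts ** 2

mid = relative_dynamic_power(1600, 1.000)   # assumed mid-curve point
top = relative_dynamic_power(1860, 1.141)   # max boost at the measured peak voltage

print(f"~{top / mid:.2f}x the dynamic power for ~{1860 / 1600:.2f}x the clockspeed")
# -> ~1.51x the dynamic power for ~1.16x the clockspeed
```

In other words, under these assumptions the final stretch of clockspeed costs roughly three times as much power, proportionally, as it returns in frequency.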

Radeon Video Card Average Clockspeeds
(Rounded to the Nearest 10MHz)
Game                   5500 XT    5700
Max Boost Clock        1860MHz    1750MHz
Official Game Clock    1717MHz    1625MHz
Tomb Raider            1810MHz    1680MHz
F1 2019                1810MHz    1650MHz
Assassin's Creed       1750MHz    1700MHz
Metro Exodus           1800MHz    1640MHz
Strange Brigade        1840MHz    1660MHz
Total War: TK          1840MHz    1690MHz
The Division 2         1800MHz    1630MHz
Grand Theft Auto V     1830MHz    1690MHz
Forza Horizon 4        1830MHz    1700MHz

Despite that power cost, the RX 5500 XT manages to keep its clockspeeds rather high. Even without Sapphire’s higher power cap performance BIOS, their 8GB card frequently runs at 1800MHz or better, putting it well ahead of AMD’s official game clock of 1717MHz. This means the card is running fairly close to its clockspeed limit (so Sapphire’s extra power doesn’t do a whole lot), but it also means the card is doing all of this on 130W (or less) of power.
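As a quick sanity check on that claim, the short snippet below averages the observed 5500 XT clockspeeds from the table above and compares the result against AMD's official 1717MHz game clock.

```python
# Average the observed RX 5500 XT clockspeeds from the table above and
# compare against AMD's official 1717MHz game clock (all values in MHz,
# rounded to the nearest 10MHz as in the table).

observed_5500xt = {
    "Tomb Raider": 1810, "F1 2019": 1810, "Assassin's Creed": 1750,
    "Metro Exodus": 1800, "Strange Brigade": 1840, "Total War: TK": 1840,
    "The Division 2": 1800, "Grand Theft Auto V": 1830, "Forza Horizon 4": 1830,
}
game_clock = 1717

avg = sum(observed_5500xt.values()) / len(observed_5500xt)
print(f"Average observed clock: {avg:.0f}MHz "
      f"({avg - game_clock:+.0f}MHz over the official game clock)")
# -> Average observed clock: 1812MHz (+95MHz over the official game clock)
```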

Idle Power Consumption

Load Power Consumption - Shadow of the Tomb Raider

Load Power Consumption - FurMark

Thanks to the combination of TSMC’s 7nm process, AMD’s firmware optimizations, and, I suspect, the use of just 8 PCIe lanes, the RX 5500 XT fares very well when it comes to idle power. At 50W for the entire system, it comes in a few watts below every other configuration, which, for idle scenarios where power consumption is already low, is a sizable gap. It’s no wonder Sapphire is able to offer a zero fan speed idle mode here; the card is burning very little power at idle.

Load power looks fairly good as well. Under Tomb Raider, total system power consumption with the AMD cards is highly competitive with the NVIDIA competition (though, as we’ve seen, actual game framerates trail a bit). AMD does fall behind under FurMark, however, as the 130W+ RX 5500 XT cards all have higher TDPs than NVIDIA’s 120W/125W equivalents, and FurMark drives all of these cards to their power limits.

In practice, all of this generally reflects the cards’ relative specifications. The RX 5500 XT is able to hang with the somewhat inefficient GTX 1650 Super; however, once we get to the more efficient GTX 1660, NVIDIA is consuming less power while delivering better performance.

Idle GPU Temperature

Load GPU Temperature - Shadow of the Tomb Raider

Load GPU Temperature - FurMark

Early on I mentioned that Sapphire’s Pulse cards might be a bit overbuilt, and now that we’re getting into temperature and noise measurements, we get to see why. The idle GPU temperatures are what we’d expect for a card with a zero fan speed idle mode; meanwhile the load temperatures don’t crack 70C under Tomb Raider, and even FurMark only pushes the worst of the cards to 76C, which is well within tolerances.

Idle Noise Levels

Load Noise Levels - Shadow of the Tomb Raider

Load Noise Levels - FurMark

But when we get to noise, this is where Sapphire blows our socks off. Or rather, doesn’t blow our socks off?

The load noise levels I measured here were so low that it took extra effort to properly replicate the results and isolate the noise sources. With a card TDP of 130W, those big 95mm fans end up doing very little work. The PWM- and monitoring-enabled fans run at under 800 RPM in gaming workloads, and it’s only when using Sapphire’s higher-TDP performance BIOS that the fans crack 1000 RPM.

Sapphire could probably cool a 200W card with this cooler, and I wouldn’t be too surprised to learn that it was lifted from exactly such a card. But the net result is that while the card is a space hog, it’s a silent space hog. With load noise levels below 40 dB(A) for everything except FurMark, the card is barely louder than the rest of the system. Compared to our GeForce cards, all of which are smaller cards with equally small fans, the difference is substantial. Sapphire may have overbuilt their card, but as a result they’ve struck a great balance between temperatures and cooling performance, delivering great acoustics in the process.

Comments

  • Valantar - Thursday, December 12, 2019 - link

    What? This class of GPU is in no way whatsoever capable of gaming at 4K. Why include a bunch of tests where the results are in the 5-20fps range? That isn't useful to anyone.
  • Zoomer - Saturday, December 21, 2019 - link

    AT used to include them. I just ignored it for a card of this class; probably others did as well.
  • Ravynmagi_ - Thursday, December 12, 2019 - link

    I lean more Nvidia too and I didn't get that impression from the article. I felt it was fair to AMD and Nvidia in its comparison of the performance and facts. I wasn't bothered by where they decided to cut off their chart.
  • FreckledTrout - Friday, December 13, 2019 - link

    Same here. I don't need to see numbers elucidating how bad these low end cards are at 4k. Let's move on.
  • Dragonstongue - Thursday, December 12, 2019 - link

    I <3 how compute these days adamantly refuse to use the "old standard"
    i.e MINING

    this shows Radeon in vastly different light, as the different forms of such absolutely show difference generation on generation, more so Radeon than Ngreedia err I mean Nvidia.

    seeing as one can take the wee bit of time to have a -pre set that really needs very little change (per brand and per specific GPU being used)

    instead of using "canned" style benchmarks, that often are very much *bias* towards those who hold more market share and/or have the heavier fist to make sure they are shown as "best" even when the full story simply is NOT being fully told...yep am looking direct at INTC/NVDA ... business is business, they certainly walk that BS line constantly, to very damaging consequence for EVERYONE

    ............

    I personally think in this regard, AMD likely would have been "best off" to up the power budget a wee touch, so the "clear choice" between going with older stuff they probably and likely not want to be producing as much anymore (likely costlier) that is RX 4/5xx generation such as the 570-580 more importantly 590, this "little card" would be that much better off, instead, they seem to "adamant" want to target the same limiting factor of limited memory bus size (even though fast VRAM) still wanting to be @ the "claimed golden number" of "sub" $200 price point --- means USA or this price often moves from "acceptable" to, why bother when get older far more potent stuff for either not much more or as of late, about the same (rarely less, though it does happen)

    1080p, I can see this, myself still using a Radeon 7870 on a 144Hz monitor "~3/4" jacked up settings (granted it is not running at full rate as the GPU does not support run this at full speed, but my Ryzen 3600 helps huge.

    still, a wee bit more power budget or something would effectively "bury" or make moot 580 - 590, then wanting to sell for that "golden" $200 price point, would make much more sense, seeing as they launched the 480 - 580 "at same pricing" (for USA) in my mind, and all I have read, with the terrific yields TSMC has managed to get as well as the "reasonable low cost to produce due to very very few "errors" THIS should have targeted 175 200 max.

    They are a business, no doubt, though they in all honesty should have looked at the "logical side" that is, "we know we cannot take down the 1660 super / Ti the way we would like to, while sticking with the shader count / memory bus, so why not say fudge it, add that extra 10w (effectively matching 7870 from many many generations back in the real world usage) so we at least give potential buyers a real hard time to decide between an old GPU (570-580-590) or a brand spanking new one that is very cool running AND not at all same power use, I am sure it will sell like hotcakes, provided we do what we can to make sure buyers everywhere can get this "for the most part" at a guaranteed $200 or less price point, will that not tick our competition right off?"

    ..........
  • thesavvymage - Thursday, December 12, 2019 - link

    What are you even trying to say here.....
  • Valantar - Thursday, December 12, 2019 - link

    I was lost after the first sentence. If it can be called a sentence. I truly have no idea what this rant is about.
  • Fataliity - Thursday, December 12, 2019 - link

    I think the game bundle is what they chose as their selling point. I'm sure they get a good deal with game pass being the supplier of CPU/GPU on xbox. So their bundle is most likely almost free for them. Which pushes the value up. Without bundle I imagine 5500 4gb being 130 and 8gb being 180.
  • TheinsanegamerN - Sunday, December 15, 2019 - link

    That's a LOTTA words just to say "AMD just made another 580 for $20 less, please clap."
  • kpb321 - Thursday, December 12, 2019 - link

    The ~$100ish 570's still look like a great deal as long as they are still available. For raw numbers they have basically the same memory bandwidth and compute as a 5500 but the newer card ends up being slightly faster and uses a bit less power. It is overall more efficient but IMO nowhere near enough to justify the price premium over the older cards. I'm not as sure that the 570/580 or 5500 will have enough compute power for the 4 vs 8gb of memory to really make a difference but my 570 happens to be an 8gb card anyway.
