The NVIDIA GeForce RTX 2080 Super Review: Memories of the Future
by Ryan Smith on July 23, 2019 9:00 AM EST
Power, Temperatures, & Noise
Last, but not least of course, is our look at power, temperatures, and noise levels. While a high performing card is good in its own right, an excellent card can deliver great performance while also keeping power consumption and the resulting noise levels in check.
GeForce Video Card Voltages

| RTX 2080S Boost | RTX 2080S Idle | RTX 2080 Boost | RTX 2070S Boost |
|---|---|---|---|
| 1.05v | 0.65v | 1.05v | 1.043v |
Overall, the voltages being used for the RTX 2080 Super are not any different than NVIDIA’s other TU104 cards – or any of their other Turing cards, for that matter. At its highest clockspeeds the card runs at 1.05v, quickly stepping down to below 1v at lower clockspeeds. The 0.65v idle voltage is among the lowest we’ve ever recorded for an NVIDIA card, however.
GeForce Video Card Average Clockspeeds

| Game | RTX 2080S | RTX 2080 Ti | RTX 2080 | RTX 2070S |
|---|---|---|---|---|
| Max Boost Clock | 1965MHz | 1950MHz | 1900MHz | 1950MHz |
| Boost Clock | 1815MHz | 1545MHz | 1710MHz | 1770MHz |
| Tomb Raider | 1937MHz | 1725MHz | 1785MHz | 1875MHz |
| F1 2019 | 1920MHz | 1725MHz | 1785MHz | 1875MHz |
| Assassin's Creed | 1920MHz | 1800MHz | 1815MHz | 1890MHz |
| Metro Exodus | 1937MHz | 1755MHz | 1785MHz | 1875MHz |
| Strange Brigade | 1920MHz | 1695MHz | 1770MHz | 1875MHz |
| Total War: TK | 1937MHz | 1740MHz | 1785MHz | 1875MHz |
| The Division 2 | 1937MHz | 1635MHz | 1740MHz | 1845MHz |
| Grand Theft Auto V | 1937MHz | 1815MHz | 1815MHz | 1890MHz |
| Forza Horizon 4 | 1937MHz | 1815MHz | 1800MHz | 1890MHz |
Looking at clockspeeds, we can piece together a couple of interesting details. On the clockspeed side, NVIDIA hasn’t actually changed the card’s maximum clockspeed all that much: our RTX 2080 topped out at 1900MHz, and the RTX 2080 Super is only a bit higher at 1965MHz. That they’re doing it without more voltage is a bit more interesting – it suggests chip quality may have improved a bit over the past year – but not too surprising.
What is more surprising, however, are the average clockspeeds we recorded for the RTX 2080 Super. In short, the card spends a lot of time at or near its top boost bins. With temperature compensation active, our RTX 2080 Super tops out at 1937MHz, a clockspeed it holds for over half of our games even at 4K. Quite frankly, the RTX 2080 Super is almost a boring card in this respect (in a good way); there’s just not much power throttling going on here. If anything, the hard part is getting the card above 90-95% power usage.
This, ultimately, is why the RTX 2080 Super is as fast as it is versus the vanilla RTX 2080. The extra SMs help, but it’s the extra 100-150MHz on the GPU clockspeed that’s really driving the card.
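As a rough back-of-the-envelope check (a sketch only, using the 48 SMs of the fully-enabled TU104 in the Super versus the 46 SMs of the vanilla RTX 2080, and the rated boost clocks from the table above), the two factors combine like so:

```python
# Rough relative-throughput estimate: shader throughput scales with
# (SM count) x (clockspeed), all else being equal.

sm_2080s, clk_2080s = 48, 1815  # RTX 2080 Super: full TU104, rated boost (MHz)
sm_2080,  clk_2080  = 46, 1710  # RTX 2080: rated boost (MHz)

sm_gain    = sm_2080s / sm_2080   # ~1.043 -> ~4% from the extra SMs
clock_gain = clk_2080s / clk_2080 # ~1.061 -> ~6% from the higher clocks
total_gain = sm_gain * clock_gain # ~1.11  -> ~11% on paper

print(f"SM gain: {sm_gain:.3f}, clock gain: {clock_gain:.3f}, total: {total_gain:.3f}")
```

On paper, then, the clockspeed bump contributes more than the extra SMs do, which lines up with the observation above; real-world gains of course also depend on memory bandwidth and power limits.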
Getting to power consumption itself, idle is effectively unchanged, exactly as we’d expect. Load power, on the other hand, pays the price for those 1900MHz+ clockspeeds. Under both FurMark and Tomb Raider, our RTX 2080 Super-equipped system draws almost the same amount of power as the RTX 2080 Ti system, with a difference of just a few watts. That performance doesn’t come for free. NVIDIA’s overall power efficiency is still quite good here (the Radeon VII won’t be touching it, for example), but it has clearly regressed a bit versus the RTX 2080 Ti and vanilla RTX 2080.
It is worth noting, however, that the card was often clockspeed-limited rather than power-limited. So while Tomb Raider was specifically picked to be a punishing game – a task it delivered on here – I fully expect that the RTX 2080 Super draws a bit less power than the RTX 2080 Ti in around half of our other games.
With higher power consumption and the same cooler come higher temperatures. FurMark’s 77C is still several degrees below the card’s 84C thermal throttle point, but the higher temperature is a very straightforward consequence of the increased power consumption.
Last, but not least, we have noise. Again, this is the same cooler as the RTX 2080 & RTX 2080 Ti, so the card has to work a bit harder to keep itself cool versus the original RTX 2080. The net result is that the RTX 2080 Super splits the difference between the original RTX 2080 and the RTX 2080 Ti, peaking at 46.5 dB(A). This is unlikely to be a very noticeable change compared to the RTX 2080, but it’s louder nonetheless. I’m actually a bit surprised it didn’t pull even with the RTX 2080 Ti, but then our RTX 2080 Ti sample seems to run just a bit loud, period – it’s a bit louder even at idle.
111 Comments
Cellar Door - Tuesday, July 23, 2019
The delta compression used by Nvidia is lossless.

notashill - Tuesday, July 23, 2019
If memory bandwidth was "the" bottleneck, then the Radeon VII would be the fastest consumer-level GPU on the market by an enormous margin.

Samus - Tuesday, July 23, 2019
Sadly I think you are right. While it's commendable that AMD has always pushed higher memory capacities to the mainstream, their focus on memory bandwidth has never really paid off, and at a huge expense in die area for the larger memory controller, plus an obvious energy efficiency deficit. This is why the 3-channel memory introduced with Intel's X58 chipset was dropped in favor of a reversion back to dual channel. It would be years before we would move beyond dual channel again, and even then, quad channel never became mainstream.

The reason is simple. Even on a single channel, Intel CPUs in particular show extraordinary memory performance. The controller is well optimized and cache hit rates are high. Likewise, Nvidia's excellent compression combined with optimized caches makes high memory bandwidth unnecessary.
willis936 - Tuesday, July 23, 2019
SISD benefits greatly from caching and ILP. SIMD doesn't need to run ILP to keep execution units busy, so it chews through memory bandwidth by comparison. There are also quickly diminishing returns on GPU cache size. GPUs have 20x the memory bandwidth of CPUs for a good reason: they use it.

flyingpants265 - Monday, July 29, 2019
Somewhat related to the subject of compression... adaptive resolution is by far the best graphics technology I have ever seen. Render at 1800p, drop down to 1400p when below the target framerate, and upscale everything to 4K. No need to buy the highest-end graphics card anymore. If we had adaptive resolution when Far Cry 1 came out, there would have been no market for the 6800; you could just use a 6600.

Combine it with checkerboarding for consoles, which is impressive in its own right by NEAR-HALVING the workload. So render half of 1800p every other frame (equivalent to about 2300*1300 pixels, so 1.44x 1080p, not 4.0x) and get a generated 4K image.
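As a quick sanity check on those numbers (a sketch only, assuming 16:9 resolutions, so "1800p" is 3200x1800 and 4K is 3840x2160):

```python
# Pixel counts behind the checkerboard-rendering comparison above.
p1080 = 1920 * 1080   # native 1080p:  2,073,600 px
p1800 = 3200 * 1800   # "1800p":       5,760,000 px
p4k   = 3840 * 2160   # native 4K:     8,294,400 px

# Checkerboarding shades roughly half the 1800p pixels each frame.
checkerboard = p1800 // 2   # 2,880,000 px per frame

print(f"4K is {p4k / p1080:.2f}x the pixels of 1080p")         # 4.00x
print(f"checkerboarded 1800p is {checkerboard / p1080:.2f}x")  # 1.39x
```

This lands at ~1.39x, just under the commenter's ~1.44x estimate, but the point stands: a checkerboarded 1800p frame costs a fraction of the 4.0x pixel load of native 4K.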
notashill - Tuesday, July 23, 2019
Radeon VII has double the bandwidth for the same price, but it doesn't really help performance, at least in games. I think there has been more focus on effectively utilizing bandwidth because making the buses wider can get really expensive.

Smell This - Tuesday, July 23, 2019
Hard to say... GDDR6 still has a good deal of *theoretical* bandwidth on the table, there is the economical 'ghetto-HBM2' from Sammy, and HBM3 in the short term.
We are likely to hear about Radeon **Navi-Instinct** pro cards this quarter, in addition to a Titan/Ampere 7nm HPC update. I'm thinking the trend will continue toward more efficient 'wider' bandwidth and advances in compression algorithms, too.
wr3zzz - Tuesday, July 23, 2019
How do these new cards draw so much more power than the GTX 980 under load, yet have lower load temperatures and noise? Are the new fans that good?

Ryan Smith - Tuesday, July 23, 2019
Blower versus open-air (axial) cooler.

Betonmischer - Tuesday, July 23, 2019
Absolutely, if you compare against the reference blower that Nvidia used prior to the RTX 20 series.