The AMD Radeon RX 5700 XT & RX 5700 Review: Navi Renews Competition in the Midrange Market
by Ryan Smith on July 7, 2019 12:00 PM EST

Closing Thoughts
Easily the most exciting kind of video card launch, the dawn of a new GPU architecture is a rare event that’s not to be missed. New architectures give vendors a chance to turn the playing field on its metaphorical head, defying some expectations, setting new ones, and redefining what is possible with a video card. Especially in the case of today’s launch of the Radeon RX 5700 series cards, and their RDNA architecture Navi GPUs, there’s a lot to unpack. But one way or another, this is easily going to be the most important and eventful video card launch of 2019. So let’s dig in.
For those of you who are reading this rare Sunday launch article with a cup of coffee (or are an AnandTech editor who’s been drinking it all night long), perhaps it’s best to cut to the chase and then build out from there. RDNA is an incredibly important architecture for AMD, and it sets the stage for a lot of things to come. At the same time, however, it’s also the first part in a longer-term plan for AMD, with AMD continuing to further iterate on their design over the coming years.
So how does AMD’s first example of RDNA stack up? For AMD and for consumers, it’s much-needed progress. To be sure, the Radeon RX 5700 series cards are not going to be Turing killers. But they are competitive in price, performance, and power consumption – the all-important trifecta in which AMD has trailed NVIDIA for too many years now.
By the numbers then, the Radeon RX 5700 XT holds an 11% performance advantage over its nearest competition, NVIDIA’s new GeForce RTX 2060 Super. Similarly, the RX 5700 (vanilla) takes a 12% advantage over the RTX 2060 (vanilla). So NVIDIA was right to shift their product stack last week in preparation for today’s AMD launch, as AMD is now delivering the performance of what was last week a $500 video card for as little as $350. That’s a major improvement in performance-per-dollar, to say the least.
| Performance Summary | Price | Relative Performance | Relative Perf-Per-Dollar |
| --- | --- | --- | --- |
| RX 5700 XT vs RTX 2060 Super | $399 | +11% | +11% |
| RX 5700 vs RTX 2060 | $349 | +12% | +12% |
| RX 5700 XT vs RTX 2070 Super | $399 / $499 | -5% | +19% |
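For readers curious how the perf-per-dollar column falls out of the other two, it's just the performance ratio divided by the price ratio. A minimal sketch of that arithmetic, using the prices and relative performance figures from the table above (the helper function name is ours, not an AnandTech metric):

```python
# Sketch: deriving "Relative Perf-Per-Dollar" from relative performance
# and the two cards' prices. Numbers come from the table above.

def perf_per_dollar_gain(rel_perf_pct, radeon_price, rival_price):
    """Radeon card's perf-per-dollar advantage over its rival, in percent:
    (performance ratio) / (price ratio) - 1."""
    perf_ratio = 1 + rel_perf_pct / 100       # e.g. +11% -> 1.11
    price_ratio = radeon_price / rival_price  # e.g. $399 / $499
    return round((perf_ratio / price_ratio - 1) * 100)

print(perf_per_dollar_gain(11, 399, 399))  # RX 5700 XT vs RTX 2060 Super -> 11
print(perf_per_dollar_gain(12, 349, 349))  # RX 5700 vs RTX 2060 -> 12
print(perf_per_dollar_gain(-5, 399, 499))  # RX 5700 XT vs RTX 2070 Super -> 19
```

Note how the last row works: the XT is 5% slower than the RTX 2070 Super, but at $399 versus $499 it still comes out roughly 19% ahead per dollar.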
And, thankfully, none of this breaks the bank on power consumption either. The RX 5700 fares slightly better than its opponent, while the highly-clocked RX 5700 XT is more power-hungry in securing its performance advantage over the RTX 2060 Super. And with the RX 5700 XT within spitting distance of the RTX 2070 Super in terms of gaming performance, this gives you a good idea of what the power cost was for that last 11%. For the moment, then, while AMD hasn’t significantly shifted the power/performance curve versus Turing, they have also avoided the kind of painful performance chase that delivered toasty cards like the RX Vega 64 and RX 590.
If there is a real downside here, it’s that AMD’s blower-based coolers aren’t going to impress anyone with their performance, even by blower standards. The RX 5700 XT is a bit louder than even NVIDIA’s GTX 1080 Ti, which is a flat-out higher TDP card. To be sure, it’s well ahead of the RX Vega series here (or even the reference 390X I dug out), but AMD has yet to completely master the dark art of quiet blowers.
Tangentially, the biggest risk for AMD here is that they’ve achieved a lot of this efficiency gain by leaping ahead of NVIDIA by a generation on the manufacturing side, tapping TSMC’s 7nm process. NVIDIA will get their own chance to tap into the benefits of the new node as well, which, all other elements held equal, is likely to tilt things in NVIDIA’s favor once again. The fortunate thing for AMD, at least, is that NVIDIA doesn’t seem to be in a hurry to get there, and we’re not expecting 7nm NVIDIA consumer parts this year.
The outstanding question for gamers then is whether AMD’s performance and value advantage is enough to offset their feature deficit. With AMD’s efforts fully invested into the backend of their RDNA architecture as opposed to adding user-facing features, the RX 5700 series doesn’t bring any marquee hardware features to the table, and it doesn’t do anything to catch up to NVIDIA’s RTX cards. The end result is that the Radeon cards are faster for the price, but NVIDIA brings things like ray tracing and variable rate shading that AMD cannot.
Truthfully, there is no good answer here – at least not one that will be universally agreed upon. Variable rate shading is merely a (cool) performance optimization, but hardware accelerated ray tracing is something more. And NVIDIA has been working very, very hard to get developers to adopt it. The current crop of games arguably isn’t using it to earth-shattering effect (though Metro is coming close), but the slate for 2020 includes several high-profile games. So it comes down to a question of whether to take the higher performance now and risk missing out later, or to take ray tracing now for an as-yet unproven future.
Ultimately, I don’t think there’s a bad buy here between the RX 5700 XT and the RTX 2060 Super it competes with; both are solid cards with some unique pros and cons, and either one should make most gamers happy. As for the vanilla showdown between the RX 5700 and RTX 2060, AMD’s hand is much stronger here (or rather, NVIDIA’s is weaker), which makes for an easy decision. The RX 5700 is faster, slightly less power hungry, and it features a full 8GB of VRAM. The RTX 2060 was always a risky buy with its mere 6GB of VRAM, and now with the RX 5700 there’s really no reason good enough to consider it, even with ray tracing.
As for gamers looking for an upgrade, things are a bit more mixed. The entrance of the RX 5700 series has pushed midrange video card prices down, but not by incredible amounts. On a pure performance basis, AMD’s new cards would be very solid upgrades over the RX 500 series, and at similar energy usage; but they also cost nearly twice as much as the RX 500 series did at launch. The RX 5700 series is perhaps best described as a replacement for the RX Vega series in price and positioning, rather than a proper generational successor to the RX 500 series. Instead, the new cards are a more meaningful upgrade for any GTX 970 or R9 390(X) holders who are looking for their next midrange card; for them, the RX 5700 series delivers performance gains in leaps and bounds.
In the meantime, it’s a welcome sight to see a more competitive AMD in the video card market. With AMD not-so-coincidentally launching a new range of excellent CPUs today, a new cycle of system builds is kicking off, which the RX 5700 series is well-positioned to capture a piece of. Ultimately then, while the Radeon RX 5700 series is not AMD’s Ryzen 3000 moment for video cards, it’s a return to form for the company, and it’s great to see competition renewed within the video card space. Now to see where the rest of AMD’s journey with Navi takes them over the coming months.
135 Comments
View All Comments
eastcoast_pete - Tuesday, July 9, 2019 - link
Actually, I like the idea used in the graph you linked to: $ per fps, averaged from 18 games, all at 1080p very high settings. It allows a value comparison all the way from lower to high-end cards.

Meteor2 - Monday, July 8, 2019 - link
C'mon jjj you're better than that.

sgkean - Monday, July 8, 2019 - link
How does enabling the various advanced features (Ray Tracing, AMD Fidelity FX, AMD Image Sharpening) affect the game scores? With the performance being so close, and these new features/technologies being the main difference, it would be nice to see what effect they have on performance.

Wardrop - Monday, July 8, 2019 - link
I assume the noise of these is such due to the use of a blower? I'm guessing we'll have to wait for custom PCBs and coolers to get something quieter, or otherwise go with water cooling.

xrror - Tuesday, July 9, 2019 - link
Argh... yet again, it seems like AMD is pushing beyond the sweet spot of the process node to try and force as much raw performance out as they can.

I really don't want to be yet another person bashing on Raja. He probably did get a bit short-changed on personnel resources at AMD, as Ryzen really DID need to succeed or else AMD dies. And he did deliver good GPU compute cores for the higher-margin workstation markets.
But... it just feels like AMD needs to come to terms with their fabrication node and how to get GPU cores to "kickith the butt" beyond beating Intel IGP graphics.
Which... feels unfair in a way. The only reason AMD "sucks" is that nVidia right now is so stupidly dominant in discrete graphics (and major kudos to nVidia for mastering that on an "older node", even). I mean, even Intel had really bad problems porting its IGP graphics to 10nm Cannon Lake.
But that all said, RX 5700 really feels like it's fighting against the process node to not suck. Intel may (hopefully, might) actually get its s**t together and bring forth a competitive discrete card (and if they "fail", guess what, that fail will hammer the lower end market) and nVidia...
well like, nVidia even -2 process nodes behind at this rate would probably still be faster. Which is stupid. All credit to nVidia, it's just I really hoped for a few more process "rabbits out of the hat" before GPU's slammed into the silicon stagnation wall.
I just wish we could have gotten maybe a doubling of graphics performance for VR before "market forces" determined that a VR/4K capable video setup is going to cost you over $1000.
Meteor2 - Tuesday, July 9, 2019 - link
"RX 5700 really feels like it's fighting against the process node to not suck." -- what are you talking about?peevee - Thursday, July 11, 2019 - link
Actually, for GPUs, with their practically linear scaling of performance with ALUs, using the densest nodes is the right approach. They probably should have used a denser, low-power variant (libraries) of TSMC's "7nm" process and added more ALUs in the same space at the expense of frequency, but that would be different from what Ryzen 3 uses, so add the extra expense to R&D.

CiccioB - Tuesday, July 9, 2019 - link
In a few words, AMD just used a lot of transistors and watts to get near Pascal efficiency. Thanks to the new 7nm PP, they managed to create something that looks acceptable.

But as we already saw in the past, they have somewhat filled the gap only because Nvidia is still waiting for the new PP to become cheaper.

Once it does, Nvidia's new architecture is going to leave these useless pieces of engineering in the dust, even if it's just a Turing shrink with no other enhancements.

10 billion transistors to improve IPC by about 1.25x and save just a few watts thanks to the 7nm PP. And to end up on par with Pascal. 10 billion transistors without support for a single advanced feature that Turing has, such as VRS (which is going to improve performance a lot in future games and is going to be the real trump card for Nvidia against this late Pascal), no mesh shading or similar, no FP+INT, no RT, and no tensors that can be used for many things including advanced AI.

10 billion transistors that simply give evidence that GCN is problematic and really needs a lot of workarounds to perform well. 4.4 billion transistors used just to improve GCN efficiency, and that resulted in a mere 1.25x.

10 billion transistors spent on fixing a crap architecture, and even that would not be enough to make it look good if the frequency/watt curve had not been completely ignored, making this chip consume the same as its rival on an older PP. Like all the previous failing architectures, starting from Tahiti.

In the end, this architecture is an attempt to fix an un-fixable GCN, and it relies only on Nvidia's delay in 7nm adoption. On the same node it would have been considered the same as Polaris or Vega: big, hot, and worthless to the point of being sold with no margins.

As we can see, it is equally a waste of transistors and watts, and it was discounted even before launch. A worthless piece of engineering that will be "steamrolled" by the next Nvidia architecture, which will set the basic path for all the coming graphics evolution while already extending what is available today through Turing.

AMD still has to add all those missing features, and it already has a really big transistor budget to handle today. 7nm, though with some revisions, is here to stay for a long time. If AMD does not change RDNA completely, they won't be able to compete except by skipping support for the more advanced features in the coming years, and they are going to enjoy this matching of performance for just a few months. Of course, the missing features will be considered useless until they eventually catch up. And they still have the console weapon to help them keep the market at a stall, as they are quite behind what the market can provide in the next years. RT is just the tip of the iceberg; there are also advanced geometry features like mesh shading that could already boost scene complexity to the moon. But we just learned that with Navi, AMD merely managed to match Maxwell's geometry capacity. A worthless piece of silicon, already discounted before launch.
Meteor2 - Tuesday, July 9, 2019 - link
"In few words, AMD just used a lot of transistors and W just to get near Pascal efficiency." -- that makes no sense at all.

Didn't bother reading the rest of your comment, sorry not sorry.
CiccioB - Wednesday, July 10, 2019 - link
I just wonder what you have seen.

Navi gets the same perf/W that Pascal has, and the same exact features: no RT, no tensors, no VRS, no mesh shading, no voxel acceleration (which was already in Maxwell), no double projection (for VR).

7nm and 10 billion transistors to be just a bit faster than a 1080, which is based on a 5.7 billion transistor chip. And using more power to do so.

Don't bother reading. It is clear you can't understand what's written.