Closing Thoughts

Easily the most exciting kind of video card launch, the dawn of a new GPU architecture is a rare event that’s not to be missed. New architectures give vendors a chance to turn the playing field on its metaphorical head, defying some expectations, setting new ones, and redefining what is possible with a video card. Especially in the case of today’s launch of the Radeon RX 5700 series cards, and their RDNA architecture Navi GPUs, there’s a lot to unpack. But one way or another, this is easily going to be the most important and eventful video card launch of 2019. So let’s dig in.

For those of you who are reading this rare Sunday launch article with a cup of coffee (or are an AnandTech editor who’s been drinking it all night long), perhaps it’s best to cut to the chase and then build out from there. RDNA is an incredibly important architecture for AMD, and it sets the stage for a lot of things to come. At the same time, however, it’s also just the first part of a longer-term plan, one that will see AMD continue to iterate on the design over the coming years.

So how does AMD’s first example of RDNA stack up? For AMD and for consumers alike, it’s much-needed progress. To be sure, the Radeon RX 5700 series cards are not going to be Turing killers. But they are competitive in price, performance, and power consumption – the all-important trifecta where AMD has trailed NVIDIA for too many years now.

By the numbers then, the Radeon RX 5700 XT holds an 11% performance advantage over its nearest competition, NVIDIA’s new GeForce RTX 2060 Super. Similarly, the RX 5700 (vanilla) takes a 12% advantage over the RTX 2060 (vanilla). So NVIDIA was right to shift their product stack last week in preparation for today’s AMD launch, as AMD is now delivering the performance of what was last week a $500 video card for as little as $350. That’s a major improvement in performance-per-dollar, to say the least.

Performance Summary
                                   Price        Relative Performance   Relative Perf-Per-Dollar
RX 5700 XT vs. RTX 2060 Super      $399         +11%                   +11%
RX 5700 vs. RTX 2060               $349         +12%                   +12%
RX 5700 XT vs. RTX 2070 Super      $399/$499    -5%                    +19%
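
For those checking the math, the relative perf-per-dollar column is simply relative performance divided by relative price. A minimal sketch in Python, using the prices and percentages from the table above:

    def rel_perf_per_dollar(rel_perf, price_a, price_b):
        """Perf-per-dollar of card A relative to card B.

        rel_perf: A's performance relative to B (e.g. 0.95 for -5%)
        price_a, price_b: launch prices in dollars
        """
        return rel_perf / (price_a / price_b)

    # RX 5700 XT ($399) vs. RTX 2070 Super ($499), 5% slower:
    print(f"{rel_perf_per_dollar(0.95, 399, 499) - 1:+.0%}")  # -> +19%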

And, thankfully, none of this breaks the bank on power consumption either. The RX 5700 fares slightly better than its opponent, while the highly-clocked RX 5700 XT is more power-hungry in securing its performance advantage over the RTX 2060 Super. And with the RX 5700 XT within spitting distance of the RTX 2070 Super in terms of gaming performance, this gives you a good idea of what the power cost was for that last 11%. For the moment then, while AMD hasn’t significantly shifted the power/performance curve versus Turing, they have also avoided the kind of painful performance chase that delivered toasty cards like the RX Vega 64 and RX 590.

If there is a real downside here, it’s that AMD’s blower-based coolers aren’t going to impress anyone with their performance, even by blower standards. The RX 5700 XT is a bit louder than even NVIDIA’s GTX 1080 Ti, which is a flat-out higher TDP card. To be sure, it’s well ahead of the RX Vega series here (or even the reference 390X I dug out), but AMD has yet to completely master the dark art of quiet blowers.

Tangentially, the biggest risk for AMD here is that they’ve achieved a lot of this efficiency gain by leaping ahead of NVIDIA by a generation on the manufacturing side, tapping TSMC’s 7nm process. NVIDIA will get their own chance to tap into the benefits of the new node as well, which, all other elements held equal, is likely to tilt things back in NVIDIA’s favor. The fortunate thing for AMD, at least, is that NVIDIA doesn’t seem to be in a hurry to get there, and we’re not expecting 7nm NVIDIA consumer parts this year.

The outstanding question for gamers, then, is whether AMD’s performance and value advantage is enough to offset their feature deficit. With AMD’s efforts fully invested in the backend of their RDNA architecture rather than in adding user-facing features, the RX 5700 series doesn’t bring any marquee hardware features to the table, and it doesn’t do anything to catch up to NVIDIA’s RTX cards. The end result is that the Radeon cards are faster for the price, but NVIDIA offers features like ray tracing and variable rate shading that AMD cannot match.

Truthfully, there is no good answer here – at least not one that will be universally agreed upon. Variable rate shading is merely a (cool) performance optimization, but hardware-accelerated ray tracing is something more. And NVIDIA has been working very, very hard to get developers to adopt it. The current crop of games arguably isn’t using it to earth-shattering effect (though Metro is coming close), but the slate for 2020 includes several high-profile games. So it comes down to a question of whether to take the higher performance now and risk missing out later, or to take ray tracing now and bet on an unproven future.

Ultimately, I don’t think there’s a bad buy here between the RX 5700 XT and the RTX 2060 Super it competes with; both are solid cards with some unique pros and cons, and either one should make most gamers happy. As for the vanilla showdown between the RX 5700 and RTX 2060, AMD’s hand is much stronger here (or rather, NVIDIA’s is weaker), which makes for an easy decision. The RX 5700 is faster, slightly less power hungry, and it features a full 8GB of VRAM. The RTX 2060 was always a risky buy with its mere 6GB of VRAM, and now with the RX 5700 there’s really no reason good enough to consider it, even with ray tracing.

As for gamers looking for an upgrade, things are a bit more mixed. The entrance of the RX 5700 series has pushed midrange video card prices down, but not by incredible amounts. On a pure performance basis, AMD’s new cards would be very solid upgrades over the RX 500 series, and with similar energy usage, but then they also cost nearly twice as much as the RX 500 series did at launch. The RX 5700 series is perhaps best described as a replacement for the RX Vega series in price and a successor to the RX 500 series in lineage; it is not a successor to RX Vega’s high-end ambitions, nor a proper replacement for the RX 500 series’ budget pricing. Instead, the new cards are a more meaningful upgrade for any GTX 970 or R9 390(X) holders who are looking for their next midrange card; for them, the RX 5700 series delivers leaps and bounds more performance.

In the meantime, it’s a welcome sight to see a more competitive AMD in the video card market. With AMD not-so-coincidentally launching a new range of excellent CPUs today, a new cycle of system builds is kicking off, which the RX 5700 series is well-positioned to capture a piece of. Ultimately then, while the Radeon RX 5700 series is not AMD’s Ryzen 3000 moment for video cards, it’s a return to form for the company, and it’s great to see competition renewed within the video card space. Now to see where the rest of AMD’s journey with Navi takes them over the coming months.

Comments

  • Zoolook13 - Friday, July 19, 2019 - link

    The 1080 has 7.2 billion transistors, and the 1080 Ti has 11.7B IIRC, so your figures are all wrong. And there are a number of features on Navi that aren't in Pascal, not to mention it's vastly superior in compute.
  • ajlueke - Wednesday, July 10, 2019 - link

    At the end of the day, the 5700 is on an identical performance-per-watt and performance-per-dollar curve to the "advanced" Turing GPUs. From that we can infer that the "advanced" Turing features really don't amount to much in terms of performance.
    Also, the AMD RDNA GPUs are substantially smaller in die area than their NVidia counterparts. More chips per wafer, and thus lower production costs. AMD likely makes more money on each sale of Navi than NVidia does on Turing GPUs.
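
On the die-size point, here is a rough sense of the scale involved: a minimal sketch using a common gross-dies-per-wafer approximation and the commonly cited die sizes (~251 mm² for Navi 10, ~445 mm² for TU106). Note that it ignores defect yield and, as the reply below points out, the fact that 7nm wafers cost more than 12nm ones:

    import math

    def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
        # First-order estimate: usable wafer area divided by die area,
        # minus a correction for partial dies lost at the wafer edge.
        r = wafer_diameter_mm / 2
        return int(math.pi * r**2 / die_area_mm2
                   - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

    print(gross_dies_per_wafer(251))  # Navi 10 (7nm):  ~239 candidate dies
    print(gross_dies_per_wafer(445))  # TU106 (12nm):   ~127 candidate dies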
  • CiccioB - Wednesday, July 10, 2019 - link

    So having "fewer features is better" is now the new AMD fanboy motto?
    The advanced Turing architecture expresses its power when you use those features, not when you test games that:
    1. Have been optimized exclusively for the GCN architecture
    2. Use few polygons, because AMD's GPU geometry capacity is crap
    3. Turn off Turing-exclusive features

    Moreover, games are going to use those new features as AMD adds them in RDNA2, so you already know these pieces of junk are going to have zero value in a few months.

    Despite this, size is not everything: 7nm wafers do not cost the same as 12nm ones, and being small is a necessity more than a choice. In fact, AMD is not going to produce bigger GPUs with all the advanced features (or whatever part of them fits in a certain die size, as those chips are going to be fatter than these) on this process until costs improve; Nvidia is doing the same, as it does not need 7nm to create better and less power-hungry cards.
    These GPUs are just a stopgap while waiting for the next real ones, which will have the features the next consoles will enjoy. That will surely include VRS and RT acceleration of some sort, as AMD cannot go without them unless it wants to be seen as left in the stone age.
    You know these pieces of junk will soon be forgotten, as they are good for nothing but filling the gap, as Vega tried to in the past. There was not enough time to create a decent architecture. They are still behind, and all they are trying to do is make a disturbance move by placing the usual discounted cards with no new features, exploiting the TSMC 7nm allocation secured for Ryzen and EPYC.

    It does not cost AMD anything to make yet another round of zero-margin cards, as they have done all these years, but they gain visibility and put pressure on Nvidia and its feature-rich big dies.
  • SarruKen - Tuesday, July 9, 2019 - link

    Last time I checked, the Turing cards were on 12nm, not 16...
  • CoachAub - Tuesday, July 9, 2019 - link

    The article states in the graphic that Navi has PCI-e 4.0 Support (2x bandwidth). Now, if this card is paired with the new X570 mobo, will this change benchmark results? I'd really like to see this card paired with a Ryzen 3000 series on an X570 mobo and tested.
  • CiccioB - Wednesday, July 10, 2019 - link

    No, it won't. Today's GPUs can't even saturate PCIe 3.0 x8 bandwidth, so having more doesn't help at all. Not in the consumer market, at least.
  • peevee - Thursday, July 11, 2019 - link

    They absolutely do saturate everything you give them, but only for the very short periods necessary to load textures etc. from main memory, which is dwarfed by loading them from storage (even NVMe SSDs) first.

    BUT... the drivers might have been optimized (at compile time and/or with manual ASM snippets) for AMD CPUs. And that makes a significant difference.
  • CiccioB - Friday, July 12, 2019 - link

    Loading time is not the bottleneck for GPU PCIe bandwidth, nor the critical part of its usage. Loading textures and shader code in 0.3 seconds instead of 0.6 does not make any difference.
    You need more bandwidth only when you saturate the VRAM and the card starts using system memory.
    But since system memory is much slower than VRAM anyway, having PCIe 2, 3, or 4 does not change much: you'll get big stutters and frame drops either way.
    And in SLI/CrossFire mode, PCIe 3.0 x8 is still not fully saturated. So in the end, PCIe 4.0 is useless for GPUs. It is a big boost for NVMe disks, and for increasing the number of available connections by using half the PCIe 3.0 lanes for each of them.
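
As a rough sanity check of those transfer-time figures, a minimal sketch using theoretical peak PCIe rates (real-world effective bandwidth is lower, and the 8 GB payload is just an illustrative full-VRAM load):

    # Approximate per-lane PCIe throughput after encoding overhead, in GB/s:
    GBPS_PER_LANE = {"3.0": 0.985, "4.0": 1.969}

    def transfer_seconds(gigabytes, gen, lanes=16):
        # Time to move a payload across the link at the theoretical peak rate.
        return gigabytes / (GBPS_PER_LANE[gen] * lanes)

    # Filling a hypothetical 8 GB frame buffer from system RAM:
    print(f"PCIe 3.0 x16: {transfer_seconds(8, '3.0'):.2f} s")  # ~0.51 s
    print(f"PCIe 4.0 x16: {transfer_seconds(8, '4.0'):.2f} s")  # ~0.25 s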
  • msroadkill612 - Thursday, July 25, 2019 - link

    "Today GPUS can't even saturare 8x PCIe-3 gen bandwidth, do having more does not help at all. "

    It is a sad reflection on the prevailing level of logic, that this argument has gained such ~universal credence.

    It presupposes that "todays" software is set in stone, which is absurd. ~Nothing could be less true.

    Why would a sane coder EVEN TRY to saturate 8GB/s, which pcie 2 x16 was not so long ago when many current games evolved their DNA?

    The only sane compromise has been to limit game's resource usage to mainstream gpu cache size.

    32GB/s tho, is a real heads up.

    It presents a competitive opportunity to discerningly use another tier in the gpu cache pool, 32GB/s of relatively plentiful and cheap, multi purpose system ram.

    We have historically seen a progression in gpu cache size, and coders eager to use it. 6GB is getting to the point where it doesnt cut it on modern games with high settings.
  • ajlueke - Wednesday, July 10, 2019 - link

    In the Radeon VII review, the Radeon VII produced 54.5 dB in the FurMark test. This time around, the 5700 produced 54.5 dB, so I would expect that the Radeon VII and 5700 produce identical levels of noise.
    Except for one caveat. The RX Vega 64 is the only GPU present in both reviews. In the Radeon VII review it produced 54.8 dB, nearly identical to the Radeon VII. In the 5700 review, it produced 61.9 dB, significantly louder than the 5700.
    So are the Radeon VII and 5700 identical in noise? Would the Radeon VII also have run at 61.9 dB in this test with the 5700? Why the discrepancy with the Vega result? A 13% gain in the reading from the identical test with the identical GPU makes it difficult to determine how much noise is actually being generated here.
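
One thing worth keeping in mind when comparing those readings: dB values are logarithmic, so a raw percentage understates the physical gap. A minimal sketch of the standard conversion, using the common rule of thumb that +10 dB reads as roughly twice as loud:

    # Comparing the two RX Vega 64 readings quoted above:
    delta_db = 61.9 - 54.8                 # a 7.1 dB difference

    power_ratio = 10 ** (delta_db / 10)    # ~5.1x the sound power
    loudness_ratio = 2 ** (delta_db / 10)  # ~1.6x perceived loudness
    print(f"{power_ratio:.1f}x sound power, ~{loudness_ratio:.1f}x as loud")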
