As part of a jam-packed day of AMD product news, moments ago AMD’s CEO, Dr. Lisa Su, got off the stage, wrapping up her suite of announcements. The highlight of those announcements is AMD’s new family of video cards, the Radeon RX 5700 series. AMD first teased these cards back at the tail-end of Computex a few weeks ago, and while they won’t actually launch until July, AMD has opened the floodgates on information – pricing, expected performance, architecture – so let’s get to it.

The Radeon RX 5700 series – which I’ll call the 5700 series for short – is AMD’s new family of mid-to-high-end video cards. Within AMD’s product stack these cards essentially replace the previous RX Vega 64/56 parts, offering similar-to-better performance at lower prices, with lower power consumption and newer features. To be clear, these are not flagship-level video cards, and at some point the Vega 64/56 will get true successors in the form of faster, more powerful high-end video cards. But within AMD’s product stack and in the broader market, this is where the new cards will land.

These new cards from AMD are part of their first wave of cards based on their new RDNA architecture family. We’ll get into (excruciating) detail about that at a later time, but at a high level RDNA makes some pretty radical shifts in how AMD’s underlying GPU architecture works, more than earning the new name and realigning our performance expectations for AMD video cards. RDNA ultimately seeks to boost both AMD’s workload efficiency – that is, getting more work done with the same resources – and their power efficiency, in order to improve their competitiveness in the PC video card market. Pioneered in the Navi family of GPUs, the RDNA architecture will be the basis of AMD products for a long time to come; and not just PC GPUs, but consoles (Xbox and PlayStation), mobile, and whatever other deals AMD can land.

But getting back to the matter at hand, AMD is launching two 5700 series cards next month. At the high end we have the fully enabled Radeon RX 5700 XT (yes, those insufferable suffixes are back), which sports 40 CUs and a peak clockspeed of over 1900MHz. Its partner in crime will be the suffix-free Radeon RX 5700, the traditional second-tier part that cuts back on some functional units and performance in the name of offering a lower-priced card (and letting AMD salvage Navi chips). These parts, AMD tells us, will be competitive with the GeForce RTX 2070 and RTX 2060 respectively, though of course this is something we will determine for ourselves once we have them in for testing.

AMD Radeon RX Series Specification Comparison

                         RX 5700 XT      RX 5700         RX 590              RX 570
Stream Processors        2560 (40 CUs)   2304 (36 CUs)   2304 (36 CUs)       2048 (32 CUs)
Texture Units            160             144             144                 128
ROPs                     64              64              32                  32
Base Clock               1605MHz         1465MHz         1469MHz             1168MHz
Game Clock               1755MHz         1625MHz         N/A                 N/A
Boost Clock              1905MHz         1725MHz         1545MHz             1244MHz
Throughput (FP32)        9.75 TFLOPs     7.9 TFLOPs      7.1 TFLOPs          5.1 TFLOPs
Memory Clock             14 Gbps GDDR6   14 Gbps GDDR6   8 Gbps GDDR5        7 Gbps GDDR5
Memory Bus Width         256-bit         256-bit         256-bit             256-bit
VRAM                     8GB             8GB             8GB                 4GB
Transistor Count         10.3B           10.3B           5.7B                5.7B
Typical Board Power      225W            180W            225W                150W
Manufacturing Process    TSMC 7nm        TSMC 7nm        GloFo/Samsung 12nm  GloFo 14nm
Architecture             RDNA (1)        RDNA (1)        GCN 4               GCN 4
GPU                      Navi 10         Navi 10         Polaris 30          Polaris 10
Launch Date              07/07/2019      07/07/2019      11/15/2018          08/04/2016
Launch Price             $449            $379            $279                $179

To start things off, as always we have the specs. It’s best to be clear now that while the raw specifications are helpful in understanding the basics of these cards, due to the RDNA architectural transition the numbers can be deceiving. In particular, RDNA incorporates a number of changes to both improve compute efficiency/utilization and reduce memory bandwidth needs. So if anything, these specifications understate the 5700 series cards by a decent degree, as in practice they’re going to be more efficient per clock than their Polaris predecessors.

And I’m using AMD’s Polaris cards here as my point of comparison – despite the fact that these new cards will perform more like Vega 64/56 – because at a hardware level Navi 10 replaces Polaris 10 as a mid-range(ish) GPU. AMD’s first 7nm GPU for this market, which is being fabbed over at TSMC, measures in at 251mm2, packing 10.3 billion transistors into that modestly-sized die. This is a bit larger than the 232mm2 Polaris 10, and incorporates 80% more transistors. So there’s a whole lot more hardware at work here, which for AMD should translate into a good deal more performance.
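
As a quick sanity check on those figures, the arithmetic below is my own back-of-the-envelope math, using only the die sizes and transistor counts quoted above (not anything AMD has published):

```python
# Back-of-the-envelope density comparison between Navi 10 and Polaris 10,
# using the die sizes and transistor counts quoted above.
navi10_transistors, navi10_area_mm2 = 10.3e9, 251
polaris10_transistors, polaris10_area_mm2 = 5.7e9, 232

print(f"Transistor increase: {navi10_transistors / polaris10_transistors - 1:.0%}")  # ~81%
print(f"Die size increase:   {navi10_area_mm2 / polaris10_area_mm2 - 1:.0%}")        # ~8%
print(f"Navi 10 density:     {navi10_transistors / navi10_area_mm2 / 1e6:.1f}M transistors/mm2")    # ~41.0M
print(f"Polaris 10 density:  {polaris10_transistors / polaris10_area_mm2 / 1e6:.1f}M transistors/mm2")  # ~24.6M
```

In other words, a roughly 8% larger die is carrying roughly 80% more transistors, which is TSMC’s 7nm process doing the heavy lifting.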

Diving into the 5700 XT, AMD’s full-fledged Navi card will attempt to put its best foot forward in terms of securing a new spot for AMD in the video card market, and in showing off the RDNA architecture. This is a 40 CU part, with clockspeeds peaking at 1905MHz. New to the Navi generation is a figure AMD is calling their “game clock”, which is analogous to NVIDIA’s boost clock, and is a conservative estimate of the average GPU clockspeed during normal games. To be sure, AMD’s clocking behavior hasn’t really changed – they still try to boost as high as they can, as much as power and thermals allow – but this value is intended to offer better guidance to buyers about what the hardware will typically do.

Looking at these clockspeed values then, in terms of raw throughput the new card is expected to deliver between 9 TFLOPs and 9.75 TFLOPs of FP32 compute/shading throughput. This is a decent jump over the Polaris cards, but on the surface it doesn’t look like a huge, generational jump, and this is where AMD’s RDNA architecture comes in. AMD has made numerous optimizations to improve their GPU utilization – that is, how well they put those FLOPs to good use – so a teraflop on a 5700 card means more than it does on preceding AMD cards. Overall, AMD says that they’re getting around 25% more work done per clock on the whole in gaming workloads. So raw specs can be deceiving.
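
For those curious where that 9 to 9.75 TFLOPs range comes from, it’s the usual FP32 math: 2 FLOPs (one fused multiply-add) per stream processor per clock, evaluated at the game and boost clocks respectively. A minimal sketch, with Python used purely for illustration:

```python
# FP32 throughput = 2 FLOPs per stream processor per clock (one fused multiply-add).
def fp32_tflops(stream_processors: int, clock_mhz: float) -> float:
    return 2 * stream_processors * clock_mhz * 1e6 / 1e12

# Radeon RX 5700 XT: 2560 stream processors (40 CUs x 64 SPs per CU)
print(fp32_tflops(2560, 1755))  # ~8.99 TFLOPs at the 1755MHz game clock
print(fp32_tflops(2560, 1905))  # ~9.75 TFLOPs at the 1905MHz boost clock
```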

Meanwhile on the frontend and backend respectively of the new GPU, the 5700 XT can spit out 4 rendered polygons per clock, and more still when primitive shading is employed. In order to consume all of the pixels that come flowing out of that process, the GPU ships with 64 ROPs on the backend, twice as many as on AMD’s Polaris cards (or the same number as the Vega cards). We’ll get into architectural matters later, but AMD has put some work in here to improve the throughput of these blocks, so on paper the 5700 XT looks like a reasonably well-balanced design.

Feeding the beast that is AMD’s Navi 10 GPU are the company’s new GDDR6 memory controllers. While AMD as a company has their arms deep in the development of GDDR memory, for product cadence reasons they are becoming the second company to employ the new memory type, behind NVIDIA. So as we’ve seen in other products, GDDR6 stands to significantly improve the amount of memory bandwidth AMD has to play with, going from 256GB/sec on comparable Polaris cards to 448GB/sec on these new Navi cards. Compounding this, AMD is now employing delta color compression throughout virtually the entire chip, so memory bandwidth efficiency as a whole is improving. Along these lines, the new GPU employs a new cache hierarchy; the nuts and bolts of this we’ll save for another time, but it ultimately keeps traffic local to the GPU, avoiding expensive off-die GDDR6 bandwidth when it can. Overall, between its 64 ROPs and significant compute throughput, the 5700 XT can eat a lot of bandwidth, and AMD intends to be well-prepared to feed it.
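
The bandwidth figures themselves fall out of standard GDDR arithmetic. And a quick worst-case estimate of pixel traffic (my own illustrative number, which ignores blending, compression, and everything else a real workload involves) shows why those bandwidth-saving measures matter:

```python
# Memory bandwidth (GB/sec) = per-pin data rate (Gbps) * bus width (bits) / 8 bits per byte.
def bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

print(bandwidth_gb_s(14, 256))  # 448 GB/sec: the 5700 series' 14Gbps GDDR6
print(bandwidth_gb_s(8, 256))   # 256 GB/sec: the RX 590's 8Gbps GDDR5

# Illustrative worst case: 64 ROPs at 1.905GHz is ~122 Gpixels/sec; at 4 bytes
# per 32bpp pixel, that alone could generate more write traffic than the memory
# bus provides, hence the compression and the new cache hierarchy.
print(64 * 1.905e9 * 4 / 1e9)   # ~487.7 GB/sec of uncompressed pixel writes
```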

Finally, let’s talk about power consumption. For this generation AMD is sticking with their Typical Board Power figures, which means they’re largely comparable to past AMD cards. In the case of the 5700 XT, this is a 225 Watt card, similar to the RX 590 and (on paper) a bit more than the Vega 56. The card draws its power from a combination of the PCIe slot and external power connectors, relying on an 8-pin + 6-pin configuration there. At 225W it’s definitely not a lightweight card when it comes to power consumption, and in our full review we’ll have to see how this translates to real-world performance, and whether AMD has tuned this card more towards performance than power efficiency.
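
As an aside, the connector math here checks out comfortably. PCIe power delivery is standardized at 75W from the slot, 75W from a 6-pin connector, and 150W from an 8-pin connector, so on paper the card has a fair bit of headroom:

```python
# Standard PCIe power budgets: slot + 6-pin + 8-pin.
available_w = 75 + 75 + 150   # 300W total available
board_power_w = 225           # the 5700 XT's typical board power rating
print(f"Headroom: {available_w - board_power_w}W")  # 75W of headroom on paper
```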

Cooling this card will be a largely traditional AMD blower. AMD is employing an aluminum shroud and backplate here (similar to the Vega 64), with a vapor chamber drawing heat up from the GPU to the heatsink. AMD tells us that the blower itself has been further optimized for air flow and noise, and the company has set the default acoustic limit to a relatively low 43dB. Ultimately AMD is trying to strike a new balance between the benefits of the blower design and noise; blowers work far more consistently, which AMD considers desirable for their wide range of customers, but by tuning the blower and capping the noise a bit lower, they’re trying to keep it from being quite so audible in quiet environments.

Radeon RX 5700

Not to be entirely overshadowed, below the Radeon RX 5700 XT we have the vanilla Radeon RX 5700. This card is largely cut from the same cloth as its faster XT sibling, trading off some performance for lower power consumption and lower pricing.

The lesser of the 5700 cards ships with 36 CUs enabled, and more modest clocks. The average game clock is rated for 1625MHz, with a maximum boost of 1725MHz, meaning that for compute/shader/texture workloads it should deliver around 83% of the 5700 XT’s performance. Meanwhile for ROP and geometry throughput, we’re looking at 93% of that performance.
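
Those percentages fall straight out of the CU counts and game clocks in the spec table. The quick math:

```python
# Relative throughput of the RX 5700 versus the RX 5700 XT at their rated game clocks.
xt_cus, xt_game_mhz = 40, 1755
vanilla_cus, vanilla_game_mhz = 36, 1625

compute_ratio = (vanilla_cus * vanilla_game_mhz) / (xt_cus * xt_game_mhz)
rop_ratio = vanilla_game_mhz / xt_game_mhz  # both cards ship with 64 ROPs

print(f"Compute/shader/texture: {compute_ratio:.0%}")  # ~83%
print(f"ROP/geometry:           {rop_ratio:.0%}")      # ~93%
```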

Meanwhile nothing changes for the 5700 relative to the XT card when it comes to memory. It gets the same 8GB of 14Gbps GDDR6. So the 5700 should be even better fed than its full-fledged counterpart, relatively speaking.

The overall drop in performance also means power consumption has come down. The card is rated for a typical board power of 180W, which is comparable to what the RX 580 was rated for. For anyone crunching the numbers at home, the 20% drop in rated power consumption is slightly greater than the roughly 17% drop in rated performance, so it’s likely that the 5700 will end up being the more power efficient of the two cards.
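
Crunching those numbers, again using the rated specs rather than anything measured:

```python
# Rated power drops a bit faster than rated (game clock) performance,
# implying a small on-paper efficiency edge for the vanilla 5700.
power_ratio = 180 / 225  # 0.80: the 5700 is rated for 20% less power
perf_ratio = 0.83        # ~17% less rated compute throughput (from the sketch above)
print(f"Relative perf-per-watt: {perf_ratio / power_ratio:.2f}x")  # ~1.04x
```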

Product Positioning & the Competition

Last but certainly not least, of course, are the details of next month’s launch, and how AMD’s new cards stack up to the competition – both AMD and NVIDIA.

Like the rest of AMD’s new 7nm consumer hardware, the two Radeon cards will be launching on July 7th. The 5700 XT will hit the shelves at $449, while its smaller sibling, the 5700, will be a $379 card.

Next month’s launch is a traditional, driven-from-the-top full reference card launch. This means AMD’s blower-style reference cards will be what you find on the shelves on the first day. Semi-custom and custom cards are of course in the works, but those will come at a later time. Meanwhile, unlike NVIDIA, AMD isn’t doing a Founders Edition program here, so once custom cards do come out, barring any market changes they should be priced the same as AMD’s reference cards.

Interestingly, even though this is a new GPU family on a new architecture (on a new process), AMD is doing a game bundle of sorts. Bundled with the Radeon RX 5700 series cards is a 3-month subscription to Microsoft’s new Xbox Game Pass program; despite the name, the company’s all-you-can-eat game subscription service applies to PC games as well. I would be lying if I said I wasn’t a bit put off by the idea of including what amounts to a trial subscription as a bundle – as opposed to games you own – but it’s certainly different. And I suspect there was some wheeling & dealing by Microsoft to promote the new service.

Looking at the AMD product stack then, the new Radeon 5700 cards are an interesting addition to AMD’s lineup. They will remain below the Radeon VII, which stays on as AMD’s fastest card – the Vega derivative is still a tier above – but they are supplanting the Vega 64/56 and then some. According to AMD’s own data, the 5700 XT is on average 14% faster than the Vega 64, and closer to 30% faster than the Vega 56. AMD doesn’t provide similar data for the vanilla 5700, but with those numbers the card should easily best the Vega 56.

AMD has been drawing down Vega card inventory for a while now, so if you look at the few cards left on the market, you’ll find that they’re a fair bit cheaper than the new Navi cards. So for the moment the Vega cards are options as cheaper alternatives – albeit without any of Navi’s benefits – but as is usually the case, this isn’t a situation that will last. Still, I’m curious to see just how close (or far apart) the two families really end up.

Instead the big competitive question is going to be how all of this compares to NVIDIA’s current-generation GeForce RTX 20 series cards. NVIDIA kicked off that launch almost a year ago, and the unimpressively priced cards have been ruling the roost for a while now.

According to the slides AMD has provided, the $449 5700 XT should beat the $499 RTX 2070 by a few percent. That said, vendor benchmarks must always be taken with a sizable grain of salt, as vendors like to put their products in a good light. Credit to both AMD and NVIDIA here, they’ve actually been pretty decent as of late, so I suspect AMD’s numbers are close to what we’ll find with the hardware. In which case I’m expecting an earnest 2070 competitor, though we’ll see if the 5700 XT can consistently beat it.

A Quick Note on Architecture & Features