AMD's RDNA 2 Gets A Codename: “Navi 2X” Comes This Year With 50% Improved Perf-Per-Watt
by Ryan Smith on March 5, 2020 5:45 PM EST

While AMD’s Financial Analyst Day is first and foremost focused on the company’s financial performance – it’s right there in the title – this doesn’t stop the company from dropping a nugget or two of technical information along the way, to help excite investors about the future of the company.
One such nugget this year involves AMD’s forthcoming RDNA 2 family of client GPUs. The successor to the current RDNA (1) “Navi” family, RDNA 2 has been on AMD’s roadmap since last year, and it’s been previously revealed that, among other things, it will be the GPU architecture used in Microsoft’s forthcoming Xbox Series X gaming console. And while we’re still some time off from a full architecture reveal, AMD is offering just a few more details on the architecture.
First and foremost, RDNA 2 is when AMD will fill out the rest of its consumer product stack, with their eye firmly on (finally) addressing the high-end, extreme performance segment of the market. The extreme high end of the market is small in volume, but it’s impossible to overstate how important it is to be seen there – to be seen as competing with the best of the best from other GPU vendors. While AMD isn’t talking about specific SKUs or performance metrics at this time, RDNA 2 will include GPUs that address this portion of the market, with AMD aiming for the performance necessary to deliver “uncompromising” 4K gaming.
But don’t call it "Big Navi". RDNA 2 isn’t just a series of bigger-than-RDNA (1) chips. The GPUs, which will make up the codenamed “Navi 2X” family, also incorporate new graphics features that set them apart from earlier products. AMD isn’t being exhaustive here – and indeed they’re largely confirming what we already know from the Xbox Series X announcement – but hardware ray tracing as well as variable rate shading are on tap for RDNA 2. This stands to be important for AMD at multiple levels, not the least of which is closing the current feature gap with arch-rival NVIDIA.
And AMD didn’t stop there, either. Much to my own surprise, AMD isn’t just doing RDNA (1) with more features; RDNA 2 will also deliver perf-per-watt improvements. All told, AMD is aiming for a 50% increase in perf-per-watt over RDNA (1), which is on par with the improvements that RDNA (1) delivered last year. Again speaking at a high level, these efficiency improvements will come from several areas, including microarchitectural enhancements (AMD even lists improved IPC here), as well as optimizations to physical routing and unspecified logic enhancements to “reduce complexity and switching power.”
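To put that 50% target in concrete terms, here’s a quick back-of-envelope sketch – using placeholder numbers of our own rather than anything AMD has published – of the two ways such a gain can be spent:

```python
# Back-of-envelope math for a 1.5x perf-per-watt improvement.
# All inputs are illustrative placeholders, not AMD specifications.

rdna1_power_w = 225   # roughly a Radeon RX 5700 XT's board power
rdna1_perf = 100.0    # normalized performance of that card

ppw_gain = 1.5        # AMD's stated RDNA 2 perf-per-watt target

rdna1_ppw = rdna1_perf / rdna1_power_w
rdna2_ppw = rdna1_ppw * ppw_gain

# Option A: hold performance constant and cut power
power_same_perf = rdna1_perf / rdna2_ppw      # -> 150 W (2/3 the power)

# Option B: hold power constant and raise performance
perf_same_power = rdna2_ppw * rdna1_power_w   # -> 150 (1.5x the speed)

print(f"Same perf at {power_same_perf:.0f} W, "
      f"or {perf_same_power:.0f}% perf at {rdna1_power_w} W")
```

In practice, of course, a GPU vendor can blend the two: spend some of the efficiency gain on a bigger, faster chip and the rest on keeping the power envelope in check.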
Process nodes will also play some role here. While AMD is still going to be on a 7nm process – and they are distancing themselves from saying that they’ll be using TSMC’s EUV-based “N7+” node – the company has clarified that it will be using an enhanced version of 7nm. What exactly those enhancements entail, we aren’t sure (possibly TSMC’s N7P?), but AMD won’t be standing still on process tech.
This strong focus on perf-per-watt, in turn, will be a key component of how AMD can launch itself back into being a fully viable, top-to-bottom competitor with NVIDIA. While AMD is already generally at parity with NVIDIA here, part of that parity comes from an atypical lead in manufacturing nodes that AMD can’t rely on keeping. NVIDIA isn’t standing still for 2020, and neither can AMD. Improving power efficiency for RDNA 2 (and beyond) will be essential for convincingly beating NVIDIA.
Overall, AMD has significant ambitions with RDNA 2, and it shows. The architecture will be the cornerstone of a generation of consoles, and it will be AMD’s first real shot in years at taking back the flagship video card performance crown. So we eagerly await seeing what else RDNA 2 will bring to the table – and when this year the first video cards based on the new architecture will begin shipping.
Comments
watzupken - Friday, March 6, 2020
Top end cards are usually toasty. On paper, it looks really good. But again, AMD is still playing catch-up with Nvidia. With RDNA2, it seems they are catching up (on paper) with Turing. But we do have to remember we are comparing a 7nm chip from AMD with a 14nm-class GPU from Nvidia. So the true competition will only heat up later this year.

nevcairiel - Friday, March 6, 2020
Since RDNA2 is only coming out in late 2020, it will directly compete against RTX 3000 cards, which will be out by then. In short, even if these numbers translate to the high end quite well, the crown seems to remain out of reach.

Fataliity - Friday, March 6, 2020
RDNA 2 will be matched with Ampere. RDNA already competes with Turing; it just lacks ray tracing. (The 5700 XT is only ~5% slower than the 2070 Super, at a much lower price.)

Cellar Door - Friday, March 6, 2020
A 50% perf-per-watt increase allows for 2080 Ti (+20%) performance – if AMD sticks to a reasonable power envelope (TDP).
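As a rough sanity check of that claim – with inputs that are guesses for illustration, not figures from AMD or the commenter – the arithmetic works out like so:

```python
# Rough sketch of the "2080 Ti (+20%)" estimate above.
# All inputs are guesses for illustration, not official figures.

perf_5700xt = 1.00      # 5700 XT as the performance baseline
power_5700xt_w = 225    # approximate board power
power_big_navi_w = 300  # assumed "reasonable" high-end envelope

# 50% better perf-per-watt, scaled up to the bigger power budget
ppw_rdna2 = (perf_5700xt / power_5700xt_w) * 1.5
perf_big_navi = ppw_rdna2 * power_big_navi_w   # -> 2.0x a 5700 XT

perf_2080ti = 1.55      # very rough 2080 Ti uplift over a 5700 XT at 4K
print(f"~{perf_big_navi / perf_2080ti:.0%} of a 2080 Ti")  # ~129%
```

That lands in the same ballpark as the +20% figure, provided performance scales linearly with power – which, as the next reply notes, it usually doesn’t.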
Kangal - Friday, March 6, 2020

I don't think it scales linearly. So there will be a good 35%-45% improvement on the low-power side, then a great 45%-55% improvement in the midrange segment, then a decent 30%-40% improvement on the high-power end.
Overall, I think RDNA2 will slightly surpass Turing in an "IPC" comparison, but it will be doing so whilst using 7nm versus their 10nm wafers. If Nvidia were to port Turing directly to 7nm, they would have a noticeable advantage, at least when thinking in "IPC" terms. Yet we know Nvidia will make tweaks and get even further gains. So I think the RTX-30 series is going to enjoy a healthy lead in the market, just like Nvidia did in their GTX-10 series period.
I guess the good news is that progress is happening. But it's going to become even more expensive to get into PC gaming, whereas the consoles might actually be decently competitive (i.e., recall the launch of the PS2 or the Xbox 360).
michael2k - Friday, March 6, 2020
Turing is 12nm. Moving to 10nm or 7nm would give them parity or an advantage over RDNA2 by dropping power, boosting clocks, or allowing for more compute units.

Ampere is purported to also be a 50% improvement over Turing thanks to 7nm, allowing them to boost clocks, add cores, and lower power.
Spunjji - Friday, March 13, 2020
People keep doing funny maths here. Here's how I see it:

In IPC terms, RDNA on 7nm is already at relative parity with Turing on 12nm. In performance-per-watt terms, when you factor in that node difference, it's an obvious loser.
Ampere is supposed to gain at least 50% over Turing - and that's including the 7nm shrink.
RDNA 2 is alleged to have a similar improvement, without any significant shrink. They're also going to finally be releasing products based on larger dies.
The end result should be real competition at the high end again, albeit likely with Nvidia still holding the performance crown at the bleeding edge. I don't care about £1000+ cards, so this sounds like the proper competition I've been waiting for since Maxwell upset the apple cart.
Nozuka - Friday, March 6, 2020
Will be interesting to see if the close relationship between the console and PC GPUs will give AMD an advantage in the optimization of games for their platform.

CiccioB - Friday, March 6, 2020
AMD already enjoys optimization for its own architectures with respect to Nvidia.

In fact, you can't see any new Nvidia feature supported by developers - and I'm speaking about VRS, mesh shading, packed math, improved geometry handling, voxel acceleration.
We are stuck with async compute tricks to improve GCN throughput, low polygon counts, pumped-up textures (as AMD enjoys bigger memory bandwidth), and nothing more.
Hey, but those boring improvements work better on GCN than on Nvidia HW... WOW, viva la revolución... viva el progreso!
Fataliity - Friday, March 6, 2020
Nvidia co-designs many, many games with developers, so those games are indeed optimized. Nvidia has software engineers who personally help developers optimize their games. And I'm sure all the implementations of variable rate shading will have a similar API for developers to target, and the drivers for each card (Intel, Nvidia, AMD) will handle the corresponding work.