AMD Announces Radeon RX 5700 XT & RX 5700: The Next Gen of AMD Video Cards Starts on July 7th At $449/$379
by Ryan Smith on June 10, 2019 7:20 PM EST

A Quick Note on Architecture & Features
With pages upon pages of architectural documents still to get through in only a few hours, I'm not going to have time to go in-depth on the new features or the architecture for today's launch news. So I want to very briefly hit the high points of the major features, and answer what are likely to be some common questions.
Starting with the architecture itself, one of the biggest changes for RDNA is the width of a wavefront, the fundamental group of work. GCN in all of its iterations was 64 threads wide, meaning 64 threads were bundled together into a single wavefront for execution. RDNA drops this to a native 32 threads wide. At the same time, AMD has expanded the width of their SIMDs from 16 slots to 32 (aka SIMD32), meaning the size of a wavefront now matches the SIMD size. This is one of AMD’s key architectural efficiency changes, as it helps them keep their SIMD slots occupied more often. It also means that a wavefront can be passed through the SIMDs in a single cycle, instead of over 4 cycles on GCN parts.
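To make that change a bit more concrete, here's a minimal sketch of my own (not AMD code) using the HIP runtime, which simply asks the driver what wavefront width the installed GPU reports: a GCN part answers 64, while an RDNA part running in its native wave32 mode should answer 32. It assumes a working ROCm/HIP toolchain and uses device 0 for simplicity.

```cpp
// Minimal sketch: query the wavefront width the HIP runtime reports.
// Assumes a working ROCm/HIP install; device 0 is used for simplicity.
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    hipDeviceProp_t prop;
    if (hipGetDeviceProperties(&prop, 0) != hipSuccess) {
        std::printf("No HIP-capable device found\n");
        return 1;
    }
    // GCN GPUs report 64 here; RDNA GPUs in wave32 mode report 32.
    std::printf("%s: wavefront width = %d threads\n", prop.name, prop.warpSize);
    return 0;
}
```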
In terms of compute, there are not any notable feature changes here as far as gaming is concerned. How things work under the hood has changed dramatically at points, but from the perspective of a programmer, there aren’t really any new math operations here that are going to turn things on their head. RDNA of course supports Rapid Packed Math (Fast FP16), so programmers who make use of FP16 will get to enjoy those performance benefits.
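For readers who do write compute code, the kind of operation Rapid Packed Math accelerates looks roughly like the HIP kernel sketch below. The kernel name and buffers are hypothetical; the point is simply that two FP16 values are packed into each 32-bit lane and processed together, which is where the up-to-2x rate over FP32 comes from on hardware that supports it.

```cpp
// Hedged illustration of packed FP16 math ("Rapid Packed Math").
// Assumes a ROCm/HIP toolchain with FP16 intrinsics available.
#include <hip/hip_runtime.h>
#include <hip/hip_fp16.h>

// Fused multiply-add on pairs of FP16 values: out[i] = a[i] * b[i] + c[i]
__global__ void fma_fp16x2(const __half2* a, const __half2* b,
                           const __half2* c, __half2* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // Each __half2 holds two 16-bit floats in one 32-bit register,
        // so a single instruction here performs two FMAs at once.
        out[i] = __hfma2(a[i], b[i], c[i]);
    }
}
```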
With a single exception, there also aren't any new graphics features. Navi does not include any hardware ray tracing support, nor does it support variable rate pixel shading. AMD is aware of the demand for these features, and hardware support for ray tracing is on their roadmap for RDNA 2 (the architecture formerly known as “Next Gen”). But none of that is present here.
The one exception to all of this is the primitive shader. Vega’s most infamous feature is back, and better still, it’s enabled this time. Vega’s primitive shader, though fully functional in hardware, was difficult to extract a real-world performance boost from, and as a result AMD never exposed it on that architecture. On Navi the primitive shader is compiler-controlled, and thanks to some hardware changes that make it more useful, it now makes sense for AMD to turn it on for gaming.
Unique among consumer cards, the new 5700 series supports PCI Express 4.0. Designed to go hand-in-hand with AMD’s Ryzen 3000 series CPUs, which are introducing support for the feature as well, PCIe 4.0 doubles the amount of bus bandwidth available to the card, rising from ~16GB/sec to ~32GB/sec. The real-world performance implications of this are limited at this time, especially for a card in the 5700 series’ performance segment. But there are situations where it will be useful, particularly on the content creation side of matters.
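If you want to sanity-check those bandwidth figures, the napkin math is straightforward: PCIe 3.0 runs each lane at 8 GT/s and PCIe 4.0 at 16 GT/s, both with 128b/130b encoding. The little C++ snippet below is purely illustrative and works it out for an x16 slot, ignoring protocol overhead beyond the line encoding.

```cpp
// Back-of-the-envelope PCIe x16 bandwidth, ignoring protocol overhead
// beyond the 128b/130b line encoding. Purely illustrative numbers.
#include <cstdio>

int main() {
    const int lanes = 16;
    const double encoding = 128.0 / 130.0;   // 128b/130b line code
    const double gen3 = 8.0, gen4 = 16.0;    // GT/s per lane

    // One bit per transfer per lane; divide by 8 for bytes.
    std::printf("PCIe 3.0 x16: ~%.1f GB/s\n", gen3 * lanes * encoding / 8.0); // ~15.8
    std::printf("PCIe 4.0 x16: ~%.1f GB/s\n", gen4 * lanes * encoding / 8.0); // ~31.5
    return 0;
}
```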
Finally, AMD has partially updated their display controller. I say “partially” because while it’s technically an update, they aren’t bringing much new to the table. Notably, HDMI 2.1 support isn’t present – nor is the more limited option of supporting just HDMI 2.1’s Variable Refresh Rate (VRR). Instead, AMD’s display controller is a lot like Vega’s: DisplayPort 1.4 and HDMI 2.0b, including support for AMD’s proprietary FreeSync-over-HDMI standard. So AMD does have variable refresh capabilities for TVs, but it isn’t the HDMI standard’s own implementation.
The one notable change here is support for DisplayPort 1.4 Display Stream Compression. DSC, as implied by the name, compresses the image going out to the monitor to reduce the amount of bandwidth needed. This is important going forward for 4K@144Hz displays, as DP1.4 itself doesn’t provide enough bandwidth for them (leading to other workarounds such as NVIDIA’s 4:2:2 chroma subsampling on G-Sync HDR monitors). This is a feature we’ve talked about off and on for a while, and it’s taken some time for the tech to get standardized and reach a point where it’s viable in a consumer product.
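As for why DSC (or a workaround) is needed in the first place, the raw numbers tell the story: DP1.4’s HBR3 signaling provides about 25.92 Gbps of payload bandwidth (4 lanes at 8.1 Gbps with 8b/10b encoding), while an uncompressed 4K stream at 144Hz with 10-bit color already needs roughly 35.8 Gbps before blanking is even counted. A quick, purely illustrative calculation:

```cpp
// Rough math on why 4K@144Hz with 10bpc color exceeds DP1.4's raw bandwidth.
// Ignores blanking intervals, so the real requirement is somewhat higher.
#include <cstdio>

int main() {
    const double dp14_payload_gbps = 4 * 8.1 * (8.0 / 10.0);          // HBR3: ~25.92 Gbps
    const double needed_gbps = 3840.0 * 2160.0 * 144.0 * 30.0 / 1e9;  // ~35.8 Gbps

    std::printf("DP1.4 payload: ~%.2f Gbps\n", dp14_payload_gbps);
    std::printf("4K@144Hz, 10bpc RGB: ~%.1f Gbps uncompressed\n", needed_gbps);
    return 0;
}
```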
326 Comments
Acreo_Aeneas - Sunday, June 30, 2019 - link
I don't think wumpus realizes that Intel owns x86 but licenses x86-64 from AMD. Without AMD's "AMD64", Intel wouldn't exist today. AMD's designs for the past 15 years (maybe longer) aren't even based on Intel's designs. While they may have been a 2nd-tier manufacturer of Intel-based microprocessors, they haven't been for many years.
BenSkywalker - Tuesday, June 11, 2019 - link
ATi dwarfed nVidia, and AMD almost bought nVidia back in the day, but they weren't ok with having a competent CEO at the time so the deal fell through. *BOTH* halves of the current AMD were much larger than nVidia, so we have seen with great clarity what they would do if they had every advantage.
Korguz - Tuesday, June 11, 2019 - link
phynaz, the same can be said about nvidia and their own tech demos, from their current to their past demos.
evernessince - Wednesday, June 12, 2019 - link
Um, you do realize many games use compute-based shaders, right?
RSAUser - Tuesday, June 11, 2019 - link
AMD will have support for the DX ray tracing though?
mode_13h - Tuesday, June 11, 2019 - link
Software support is one thing, dedicated hardware is another.
wumpus - Tuesday, June 11, 2019 - link
So will Intel. Either one will produce a slideshow, but nifty screenshots.
levizx - Wednesday, June 12, 2019 - link
The problem for NVIDIA is, when PS5 and Xbox Scarlett hit the shelves, games will use whatever AMD chooses to accelerate via hardware, and that may or may not work well with NVIDIA's current design.
rarson - Thursday, June 20, 2019 - link
I suspect AMD has been working on a ray tracing hardware solution similar to Tensor cores or something for some time now. They may even have considered implementing it into Navi for consumer GPUs, but I imagine even if the performance was there, the increase in die size would push cost above the mid-range market these cards are targeting. There may also have been a time factor involved, i.e. perhaps the ray tracing performance itself wasn't quite as good as it needed to be to be viable (and AMD has pretty consistently asserted that they feel the technology isn't quite there yet and will support it when it becomes viable... although I think with some time and additional programming effort, Nvidia is finally starting to show some compelling evidence that it is in fact becoming viable).
These two factors (price and performance) lead me to believe that whatever AMD has been working on will show up in the PS5/Xbox Scarlett hardware, especially since both companies (to my surprise) have mentioned ray tracing support. These are "semi-custom" designs, after all, and hardware ray tracing is a nice checkbox for a console feature. This would also explain why consumer Navi is launching so much earlier than the console hardware (I suspected the consoles would launch first, near the beginning of this year, but I was also under the assumption that they were working with Zen+ due to time constraints, so obviously I was very wrong). Given how everything has played out so far, it's clear that AMD needed to launch something as soon as possible, and I suspect that due to price and performance concerns, it just made more sense to launch Navi without it.
I gotta admit, I'm a little disappointed with the lack of HDMI 2.1 though. As someone who often games on a TV, I would hate to spend $450 on a brand new GPU that won't be able to do VRR over HDMI when I finally upgrade my TV (then again, I'm not planning on doing that for a while, but still).
Spoelie - Tuesday, June 11, 2019 - link
Even beyond features, I'm disappointed by the board power. Assuming close to performance parity:
RTX 2070 - 12nm - 180W
5700 XT - 7nm - 225W
AMD isn't really there yet.