A Quick Note on Architecture & Features

With pages upon pages of architectural documents still to get through in only a few hours, for today’s launch news I’m not going to have the time to go in depth on new features or the architecture. So I want to very briefly hit the high points of the major features, and provide answers to what are likely to be some common questions.

Starting with the architecture itself, one of the biggest changes for RDNA is the width of a wavefront, the fundamental group of work. GCN in all of its iterations was 64 threads wide, meaning 64 threads were bundled together into a single wavefront for execution. RDNA drops this to a native 32 threads wide. At the same time, AMD has expanded the width of their SIMDs from 16 slots to 32 (aka SIMD32), meaning the size of a wavefront now matches the SIMD size. This is one of AMD’s key architectural efficiency changes, as it helps them keep their SIMD slots occupied more often. It also means that a wavefront can be passed through the SIMDs in a single cycle, instead of over 4 cycles on GCN parts.
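As a rough sketch of that arithmetic (my own illustration, not AMD code): the number of cycles a SIMD needs to issue a full wavefront is simply the wavefront width divided by the SIMD width.

```python
# Sketch: cycles for a SIMD to consume one full wavefront.
# GCN pairs a 64-thread wavefront with 16-wide SIMDs; RDNA pairs
# a 32-thread wavefront with 32-wide SIMDs.

def issue_cycles(wavefront_width: int, simd_width: int) -> int:
    """Cycles needed to pass a full wavefront through one SIMD."""
    return wavefront_width // simd_width

gcn_cycles = issue_cycles(64, 16)   # GCN: wave64 over SIMD16 -> 4 cycles
rdna_cycles = issue_cycles(32, 32)  # RDNA: wave32 over SIMD32 -> 1 cycle

print(f"GCN:  {gcn_cycles} cycles per wavefront")
print(f"RDNA: {rdna_cycles} cycle per wavefront")
```

This is why matching the wavefront size to the SIMD size matters: a wave32 wavefront occupies an RDNA SIMD32 for just one cycle, where GCN spent four cycles stepping a wave64 through a SIMD16.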

In terms of compute, there are not any notable feature changes here as far as gaming is concerned. How things work under the hood has changed dramatically at points, but from the perspective of a programmer, there aren’t really any new math operations here that are going to turn things on their head. RDNA of course supports Rapid Packed Math (Fast FP16), so programmers who make use of FP16 will get to enjoy those performance benefits.
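To put a number on the FP16 benefit, here is a hedged back-of-the-envelope sketch: Rapid Packed Math packs two FP16 operations into each 32-bit ALU lane, so peak FP16 throughput is simply double the FP32 rate. The ALU count and boost clock below are the RX 5700 XT's published figures as I understand them; treat them as illustrative.

```python
# Hedged sketch of peak throughput math, not vendor-provided code.
# Each ALU can retire one FMA (2 FLOPs) per clock.

def peak_gflops(alus: int, clock_ghz: float, flops_per_alu_clock: int = 2) -> float:
    """Peak GFLOPS = ALUs x clock x FLOPs per ALU per clock."""
    return alus * clock_ghz * flops_per_alu_clock

# Illustrative RX 5700 XT figures: 2560 ALUs, 1.905 GHz boost clock
fp32 = peak_gflops(2560, 1.905)  # ~9754 GFLOPS FP32
fp16 = 2 * fp32                  # Rapid Packed Math: 2x rate for FP16
```

The 2x factor is the whole story of Rapid Packed Math from a throughput perspective: same ALUs, same clocks, two half-precision results per lane per cycle.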

With a single exception, there also aren’t any new graphics features. Navi does not include any hardware ray tracing support, nor does it support variable rate pixel shading. AMD is aware of the demand for these features, and hardware ray tracing support is on their roadmap for RDNA 2 (the architecture formerly known as “Next Gen”). But none of that is present here.

The one exception to all of this is the primitive shader. Vega’s most infamous feature is back, and better still it’s enabled this time. The primitive shader is compiler controlled, and thanks to some hardware changes to make it more useful, it now makes sense for AMD to turn it on for gaming. Vega’s primitive shader, though fully hardware functional, was difficult to get a real-world performance boost from, and as a result AMD never exposed it on Vega.

Unique among consumer parts, the new 5700 series cards support PCI Express 4.0. Designed to go hand-in-hand with AMD’s Ryzen 3000 series CPUs, which are introducing support for the feature as well, PCIe 4.0 doubles the amount of bus bandwidth available to the card, rising from ~16GB/sec to ~32GB/sec. The real-world performance implications of this are limited at this time, especially for a card in the 5700 series’ performance segment. But there are situations where it will be useful, particularly on the content creation side of matters.
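Those bandwidth figures fall out of simple arithmetic. A sketch for an x16 link, one direction, counting only the 128b/130b line-code overhead:

```python
# PCIe 3.0 signals at 8 GT/s per lane, PCIe 4.0 at 16 GT/s;
# both use 128b/130b encoding, so usable bandwidth scales directly
# with the signaling rate.

def x16_bandwidth_gBps(gt_per_s: float) -> float:
    """Usable one-direction bandwidth of an x16 link, in GB/s."""
    lanes = 16
    encoding = 128 / 130                  # 128b/130b line-code efficiency
    gbit = gt_per_s * lanes * encoding    # usable Gbit/s
    return gbit / 8                       # -> GB/s

pcie3 = x16_bandwidth_gBps(8)    # ~15.8 GB/s ("~16GB/sec")
pcie4 = x16_bandwidth_gBps(16)   # ~31.5 GB/s ("~32GB/sec")
```

Protocol overhead (TLP headers, flow control) eats a bit more in practice, which is why the article's figures carry a tilde.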

Finally, AMD has partially updated their display controller. I say “partially” because while it’s technically an update, they aren’t bringing much new to the table. Notably, HDMI 2.1 support isn’t present, nor is the more limited HDMI 2.1 Variable Refresh Rate feature. Instead, AMD’s display controller is a lot like Vega’s: DisplayPort 1.4 and HDMI 2.0b, including support for AMD’s proprietary FreeSync-over-HDMI standard. So AMD does have variable refresh capabilities for TVs, but it isn’t the HDMI standard’s own implementation.

The one notable change here is support for DisplayPort 1.4 Display Stream Compression. DSC, as implied by the name, compresses the image going out to the monitor to reduce the amount of bandwidth needed. This is important going forward for 4K@144Hz displays, as DP1.4 itself doesn’t provide enough bandwidth for them (leading to other workarounds, such as NVIDIA’s 4:2:2 chroma subsampling on G-Sync HDR monitors). This is a feature we’ve talked about off and on for a while, and it’s taken some time for the tech to really get standardized and brought to a point where it’s viable in a consumer product.
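A quick sanity check of why DSC matters here (my own arithmetic; the 3:1 ratio is DSC's typical visually-lossless compression target, and display blanking overhead is ignored for simplicity):

```python
# DP1.4 at HBR3 x4 lanes carries 32.4 Gb/s raw; after 8b/10b encoding
# that leaves 25.92 Gb/s for the video stream.

def stream_gbps(h: int, v: int, hz: int, bpp: int) -> float:
    """Uncompressed video stream rate in Gbit/s."""
    return h * v * hz * bpp / 1e9

DP14_USABLE_GBPS = 25.92

uncompressed = stream_gbps(3840, 2160, 144, 24)  # ~28.7 Gb/s: doesn't fit
with_dsc = uncompressed / 3                      # ~9.6 Gb/s at 3:1: fits easily
```

Even at plain 8-bit color, 4K@144Hz overshoots DP1.4's usable bandwidth, which is exactly the gap DSC (or chroma subsampling, NVIDIA's earlier workaround) exists to close.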

AMD Announces Radeon RX 5700 XT & RX 5700 Addendum: AMD Slide Decks
Comments

  • Cellar Door - Monday, June 10, 2019 - link

    Ryan - do you know if Vega 56 and 64 are EOL?
  • RaV[666] - Monday, June 10, 2019 - link

    Can you think of one reason to make them?
    I mean, they will be made, as in Vega 10 chips for datacenters, but for gaming they're going to have much higher ASPs on the 5700.
  • AshlayW - Monday, June 10, 2019 - link

    Yes, Vega 10 is being made for Google Stadia gaming, as they use PRO V340 cards with dual "Vega 56" GPUs. GCN is still better for compute and HPC I think, but Vega 20 will largely succeed it in HPC. GCN is not going anywhere.

    Oh, I do not think Vega 10 is cheaper to make than Navi 10. Yes, the process is mature and cheaper, but the die is almost 2x the size, and once you factor HBM2 and interposer costs into that, the price is largely in the same ballpark.

    Navi 10 cannot do a "V340"-style card as easily or as effectively. Google Stadia needed graphics density, and the on-package memory on Vega 10 makes the overall space requirements much smaller. So yes, Vega 10 is likely still being made, and it is itself much cheaper than Vega 20.
  • mode_13h - Monday, June 10, 2019 - link

    Why would 2x Navis be so much worse than 1x Hawaii? You're talking about 512 bits of memory data bus in each case, just one compute die vs. two.
  • AshlayW - Tuesday, June 11, 2019 - link

    What? Where did you get Hawaii from? The V340 uses "Vega 10", which has on-package HBM2 instead of GDDR5. That is a major advantage for space savings when putting multiple GPU packages on the same card.
  • mode_13h - Wednesday, June 12, 2019 - link

    My point was that Hawaii cards use 512 bits of datapath on a single card, so perhaps 2x 256-bit Navis can fit.

    Regarding density, I don't see your point. From a server's perspective, a PCIe card is a PCIe card, unless it's low-profile, which I don't think it is.
  • olafgarten - Monday, June 17, 2019 - link

    They might not be using standard PCIe, or they may be putting multiple chips on a single card. Either way, density helps.
  • Acreo_Aeneas - Sunday, June 30, 2019 - link

    Servers would likely have to use low-profile PCIe or mini PCIe. In either case, most servers are built for memory I/O and for storage performance and capacity (along with CPU performance), rather than for how powerful the onboard GPU is. Most servers are headless and don't even have GPUs; the few that do usually use them for interfacing with a terminal.

    This does not include servers or server farms built specifically for multi-GPU setups. Usually those are scientific/graphics oriented, or part of the increasing niche of bitcoin mining.
  • WaltC - Thursday, July 04, 2019 - link

    Pretty sure I heard Su mention that Navi included (at the "RDNA" level) improvements to compute. We'll know in a few days, of course. (Impatience makes time drag, eh?...;)) I don't usually do this, but *provided I can buy a 5700 XT for either $499 (20th anniversary) or $449 MSRP* I'll be buying one next week. I'm going to be rather ticked off if the prices for the card are grossly inflated! Here's hoping AMD will control this much better than what happened at the RX 480's debut. What makes me shudder a bit is that I just read some days ago that Bitcoin was on the rise again! Stadia servers are likely only using Vega now because Navi simply wasn't available when they began. That should change in a couple of weeks, possibly.
  • Ryan Smith - Monday, June 10, 2019 - link

    As far as consumer cards go, they've been drawing down inventory from the market for a couple of months now. I don't know if they've been formally discontinued, but they may as well be de facto done for.
