At AMD’s GDC 2016 “Capsaicin” event, the company announced their long-awaited dual-Fiji video card. Being released under the name Radeon Pro Duo, the new card is a departure from the usual for AMD, with the company specifically targeting it at VR content creation rather than end-user gaming.

AMD originally teased their then in-development dual-Fiji card back at the Fiji launch event in June of 2015. At the time, the card was expected to launch towards the end of 2015 as the company’s flagship gaming card. However, at AMD’s Polaris event in December, the company announced that they were realigning the card to focus on the VR market, and would be holding it back to 2016 to launch alongside the major VR headsets.

Officially, AMD’s commentary was limited to reiterating their desire to have the card tied to the VR industry. However, I believe that AMD also delayed the card due to the poor state of AFR scaling in recent AAA games, which would make a dual-GPU card a hard sell in the typical PC gaming market. VR, by contrast, is a much better fit: through technologies such as AMD’s affinity multi-GPU, the two perspectives that need to be rendered for VR can be mapped directly to each GPU, avoiding AFR’s dependency and frame pacing issues.

In any case, with the launch of the major VR headsets finally upon us, AMD is formally unveiling their dual-Fiji card, the Radeon Pro Duo. In still not going after the consumer market, AMD has once again defied expectations; but first, let’s take a look at the specs as we know them so far.

AMD GPU Specification Comparison

| | AMD Radeon Pro Duo | AMD Radeon R9 Fury X | AMD Radeon R9 Fury | AMD Radeon R9 295X2 |
|---|---|---|---|---|
| Stream Processors | 2 x 4096 | 4096 | 3584 | 2 x 2816 |
| Texture Units | 2 x 256 | 256 | 224 | 2 x 176 |
| ROPs | 2 x 64 | 64 | 64 | 2 x 64 |
| Boost Clock | 1000MHz | 1050MHz | 1000MHz | 1018MHz |
| Memory Clock | 1Gbps HBM | 1Gbps HBM | 1Gbps HBM | 5Gbps GDDR5 |
| Memory Bus Width | 2 x 4096-bit | 4096-bit | 4096-bit | 2 x 512-bit |
| FP64 Rate | 1/16 | 1/16 | 1/16 | 1/8 |
| VRAM | 2 x 4GB | 4GB | 4GB | 2 x 4GB |
| TrueAudio | Y | Y | Y | Y |
| Transistor Count | 2 x 8.9B | 8.9B | 8.9B | 2 x 6.2B |
| Typical Board Power | 350W | 275W | 275W | 500W |
| Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 28nm | TSMC 28nm |
| Architecture | GCN 1.2 | GCN 1.2 | GCN 1.2 | GCN 1.1 |
| GPU | Fiji | Fiji | Fiji | Hawaii |
| Launch Date | Q2 2016 | 06/24/2015 | 07/14/2015 | 04/21/2014 |
| Launch Price | $1499 | $649 | $549 | $1499 |

Officially, AMD promotes the Radeon Pro Duo as having 16 TFLOPS of performance; unsurprisingly, this translates to two fully enabled Fiji GPUs clocked at around 1GHz. This puts the maximum performance of the card very close to a Fury X Crossfire setup; however, there is still the matter of TDP to get to.
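As a quick back-of-the-envelope check of AMD’s 16 TFLOPS figure (a sketch, assuming each GCN stream processor executes one FMA, i.e. two FLOPs, per clock):

```python
def peak_tflops(gpus: int, stream_processors: int, clock_ghz: float) -> float:
    """Peak single-precision throughput in TFLOPS: SPs x 2 FLOPs/clock x clock."""
    return gpus * stream_processors * 2 * clock_ghz / 1000

# Two fully enabled Fiji GPUs at 1GHz
print(peak_tflops(2, 4096, 1.0))  # 16.384, matching AMD's rounded "16 TFLOPS" claim
```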

Otherwise, as this is Fiji, the rest of the specifications should not come as a surprise. Doubling up on Fiji gives us 64 ROPs and 256 texture units per GPU, along with 4GB of HBM per GPU, clocked at 1Gbps for an effective memory bandwidth of 512GB/sec per GPU.
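That bandwidth figure falls straight out of HBM’s wide-and-slow design, as a quick sketch shows:

```python
def hbm_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Memory bandwidth in GB/s: bus width (bits) x per-pin data rate, over 8 bits/byte."""
    return bus_width_bits * data_rate_gbps / 8

# Fiji's 4096-bit HBM interface at 1Gbps per pin
print(hbm_bandwidth_gbs(4096, 1.0))  # 512.0 GB/s per GPU
```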

Meanwhile on power consumption, though it was not in the initial press release, AMD has confirmed that the card is configured for a typical board power (TBP) of 350W. In practice, this means the card is not too far removed from two R9 Nanos glued together, seeing as the R9 Nano had a TBP of 175W. However, this also means that like the R9 Nano, the card is expected to power throttle under most heavy workloads, and the full 1GHz clockspeed will rarely see action. Again, if it’s anything like the single Nano, this would put average clockspeeds at around 875MHz, so it’s not quite a single-card Fury X Crossfire analog.

Even with the 350W TBP, the shots of the card in AMD’s press materials all show three 8-pin PCIe power sockets, which would put the maximum power draw officially allowed by the PCIe specification at 525W. Given the target professional market, all signs point to AMD opting to be conservative here and sticking to the PCIe specification rather than risk pulling too much power over two sockets. These matters are rarely important to consumers, but they matter a lot to OEMs who may bundle the card, or at least offer it as an option. Meanwhile this also means that the Radeon Pro Duo is set to consume quite a bit less power than AMD’s previous-generation dual-GPU card, the Radeon R9 295X2. That card was rated for 500W and could come very close to actually drawing it.
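The 525W ceiling follows directly from the PCIe power limits of 75W from the x16 slot plus 150W per 8-pin auxiliary connector; a quick sketch:

```python
def pcie_power_budget_w(eight_pin_connectors: int, slot_w: int = 75) -> int:
    """Maximum in-spec power draw: 75W from the slot plus 150W per 8-pin connector."""
    return slot_w + 150 * eight_pin_connectors

# Radeon Pro Duo: three 8-pin sockets
print(pcie_power_budget_w(3))  # 525W ceiling, well above the card's 350W TBP
```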

With the 350W TBP, AMD has once again resorted to a closed loop liquid cooler setup in order to keep the card at two slots wide. We don’t have detailed specifications for the radiator, but it is a single 120mm radiator, and it looks quite similar to the Fury X’s. The Fury X was essentially overbuilt in this regard, so it’s easy to see why AMD wouldn’t need a larger CLLC. This also means that the Radeon Pro Duo improves upon the R9 295X2 in a small but important way: everything is cooled by the radiator; there isn’t a small fan at the center of the card on top of all of this.

As for display I/O, AMD’s official specs list 4x DisplayPort. However, based on the admittedly limited pictures AMD has released, I believe this is an error on their part. The 4th port on the card looks a great deal like an HDMI port, which would make a lot of sense, since HDMI is needed to drive the HTC Vive and Oculus Rift.

But perhaps the bigger news isn’t the specifications; it’s the target market for the card. While I had initially expected AMD to aim the card at the VR consumer market, AMD has gone in a different direction: the Radeon Pro Duo is being pitched as a content creation card, making it an unusual halfway point between a Radeon and a FirePro.

As I’m writing this up in advance, I haven’t heard AMD’s formal reasoning for why they aren’t heavily promoting it for the consumer market – though clearly the card will work there – but after giving it some thought, I suspect it has to do with the system requirements for VR gaming. Both Oculus and Valve are pushing the idea that a Radeon R9 290/GeForce GTX 970 should be the performance level VR games are designed around. If developers actually follow through on this, then having a faster card is not especially useful, since VR displays are locked to v-sync and can’t exceed their refresh rate. If a 290 already delivers 90fps, what would a Pro Duo add when developers are targeting a fixed level of quality?
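To put that v-sync ceiling in concrete terms, the refresh rate sets a hard per-frame rendering budget that no amount of extra GPU power can raise (a simple illustrative sketch):

```python
def frame_budget_ms(refresh_hz: float) -> float:
    """Time available to render one frame at a v-synced refresh rate."""
    return 1000 / refresh_hz

# The Rift and Vive both refresh at 90Hz
print(round(frame_budget_ms(90), 2))  # 11.11 ms per frame, regardless of GPU speed
```

Once a card hits that budget at the target quality level, additional performance simply goes unused.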

In that case, content creation is the next best use for the card. Games under development have yet to be tuned for performance, so it stands to reason that developers would want something as fast as possible to do their initial development on. The catch for AMD is that this limits the market for the card; besides the high price tag, the pool of developers is much smaller than the pool of consumers.

And since it’s a content creation card, it will also receive special driver attention from AMD. The Radeon Pro Duo will still use the Radeon driver set, but its drivers will be validated against a selection of major content creation applications (modeling, animation, etc.). Validated drivers won’t come down the pipeline until a month or so after the card launches, but given AMD’s workstation aspirations here, they want to give users a FirePro-lite experience rather than force a choice between a powerful card and drivers that the major tool developers will support.

Finally, let’s talk pricing and availability. AMD has announced that the card will retail for $1499. This is the same price that the Radeon R9 295X2 launched at in 2014; it’s also about $200 more than a pair of Fury Xes ($1298 combined), so pricing is arguably not aggressive there. On the other hand, it’s more compact than a pair of Fury Xes (or even a pair of Nanos), so there is the space argument to be made, and as AMD’s positioning makes clear, this is first and foremost a development card to begin with. Meanwhile the Pro Duo will be shipping in “early Q2 2016”, which means we should see it become available in the next one to two months.


56 Comments


  • Shadow7037932 - Monday, March 14, 2016 - link

    Yeah, with Polaris and Pascal coming Soon™ I don't see this being relevant for very long.
  • hansmuff - Monday, March 14, 2016 - link

    I have no clue why anyone would buy this. VR at this time is specifically not ready for Multi-GPU. 8GB cards are pushing 4GB out of the way and whether or not that's great, on a $1500 monster I sure would like to see larger than 4GB.

    And what is VR specific here, short of the name? What specifically was done to make this a "VR" card?
  • JKay6969AT - Monday, March 14, 2016 - link

    One GPU for each eye rendering is pretty VR-centric (As an Oculus DK2 owner I'd love that as VR Rendering is far more taxing than the usual 1080p screen rendering)

    I would say that 2x Fury X's would be fine for everyone else except the small number of people who have only one PCIe slot and could only fit one FULL LENGTH card into their system.

    This is AMD trying to get a decent Return On Investment on something, anything and by golly they deserve it. For years AMD has been fighting with nVidia which has SERIOUSLY hurt their profit margins and minimised how much nVidia could price gouge the shit out of the GPU market, $1000 for a Titan?!? and you complain that AMD is charging $1499 for a dual watercooled GPU featuring HBM @ 2x 4096bit memory bandwidth!!!! Gimmie a break! AMD just can't win, can they?
  • hansmuff - Monday, March 14, 2016 - link

    They probably can win, I don't know. Last time I really saw them "win" I bought a TBird 1.4GHz CPU. But that wasn't my point.

    The one GPU per eye is not at all a given yet, that is to say, it's not available, so my question is relevant. They allegedly have developed custom software that lets the 2nd GPU render the 1st GPUs output on an outside monitor, so that's good for VR production. That's the ONLY use case I've seen so far that's actually speaking to this product and relevant.
  • tuxfool - Monday, March 14, 2016 - link

    "The one GPU per eye is not at all a given yet, that is to say, it's not available"
    It is part of the LiquidVR API. So it is available. The steam VR performance tester supports this.
  • JKay6969AT - Monday, March 14, 2016 - link

    Exactly, and doubly so that even if it was a brand new feature of this card it would prove it was a card for VR, the fact is the feature will work for us peasants who can 'only' afford 2x Fury X's in Crossfire and not just this one card making it an even more relevant feature in the long run.

    I have been really disappointed with the VR Gaming performance of my rig using my DK2 even though I have an intel Core i7 4790K @4GHz, 16GB of G-Skill DDR3 RAM @ 2400MHz with a Powercolor PCS+ OC R9 290X 4GB Graphics card. This rig plays all my games just fine in 1080p on a screen but struggles with Project Cars in VR. I have considered Crossfire as a solution but was waiting until the next gen cards like Pascal and Polaris came out hoping that they would be more energy efficient meaning I wouldn't need a 1500W+ PSU.

    The consumer Rift and HTC Vive have 2x 1920x1200 screens @ 90Hz in them, the DK2 has one 1080p screen running at 75Hz and my system struggles to render smoothly. I think there are going to be a whole lot of people disappointed with their shiny new consumer VR headset if they have a comparable rig to mine, goodness knows how they will feel if they have lower specs...
  • Kutark - Tuesday, March 15, 2016 - link

    Except the Titan Black came out 3 month early, has 3x as much RAM, was faster, and approx. $200-300 more. So, while I'm not arguing to say an $1100 video card is a good value, comparatively speaking, to get that much performance that much earlier than a Fiji (again assuming you're going to spend that much in the first place) was probably worth it.

    Never the less, I do take some issue with the idea that NVidia is "price gouging". If they were charging or trying to charge say $800 for a 980 (not a Ti or Titan) then yes, that would be price gouging, but when you look at inflation, and the fact that the x80 series cards were releasing around the $500-550 mark for almost a decade now, they've actually gotten cheaper.

    For example, you can see in the past with the 280's, they *did* try to price gouge, they wanted $649 for a 280, but dropped it to $500 within a few days of release because of competition from AMD.

    So I'm not saying NVidia hasn't done it before, but they really haven't done any serious level of price gouging for a good long while now.

    Now, if AMD doesn't continue to offer good competition and basically gives them a monopoly of "nobody else can make a good product", then, my guess is we will see a lot of that.
  • iamkyle - Monday, March 14, 2016 - link

    New from Sony - the 4GB HBM Radeon Memory Stick Pro Duo!
  • Praze - Monday, March 14, 2016 - link

    The only spec I question on your predicted table here is the FP64 performance. As a prosumer marketed card, it would make more sense for it to land somewhere between the FirePro (1:4+) and Radeon cards (1:16).

    The 295x2 had 1:8, which seems like a nice number here. Then again, Nvidia neutered the Titan cards after the Titan Black, so maybe AMD's similarly trying to preserve their FirePro price tags.
  • SunnyNW - Monday, March 14, 2016 - link

    I believe it has to do with hardware limitations (in both cases, for AMD and Nvidia). That is for sure the case with the Titan X due to it utilizing the Maxwell architecture.
