Briefly announced and discussed during AMD’s 2015 GPU product presentation yesterday morning was AMD’s forthcoming dual Fiji video card. The near-obligatory counterpart to the just-announced Radeon R9 Fury X, the unnamed dual-GPU card will be taking things one step further with a pair of Fiji GPUs on a single card.

Meanwhile, as part of yesterday evening’s AMD-sponsored PC Gaming Show, CEO Dr. Lisa Su took the stage for a few minutes to show off AMD’s recently announced Fury products, capping things off with the first public showcase of the still-in-development dual-GPU card.

There’s not too much to say right now since we don’t know its specifications, but of course for the moment AMD is focusing on size. With 4GB of VRAM for each GPU on-package via HBM technology, AMD has been able to design a dual-GPU card that’s shorter and simpler than their previous dual-GPU cards like the R9 295X2 and HD 7990, saving space that would have otherwise been occupied by GDDR5 memory modules and the associated VRMs.

On the card itself we can see a PLX PEX 8747 switch providing PCIe switching between the two GPUs and the shared PCIe bus, while on the power delivery side the card uses a pair of 8-pin PCIe power sockets. No further details are being released at this time, so we’ll have to see what AMD is up to once they’re ready to reveal more about the video card.

133 Comments

  • DanNeely - Wednesday, June 17, 2015 - link

    Yes.
  • Mr Perfect - Wednesday, June 17, 2015 - link

    Sounds cool, but wouldn't they have to do alternate frame rendering on everything then? Multi-GPU doesn't scale as well as a monolithic die, and not at all if the drivers or game aren't configured properly. Imagine buying a $700 card and having it perform the same as a $100 one because it's only effectively using one of the GPUs. The sales would be dismal. DX12 would sort that all out, of course, but seeing as developers are still hesitant to require DX11, that could take another five-plus years.
  • Kevin G - Wednesday, June 17, 2015 - link

    The thing with an interposer is that you don't have to limit the design to narrow buses that have to traverse relatively great distances. HBM is a prime example of this: the 'narrow' 512-bit wide, 5 GHz effective GDDR5 bus on the R9 290X is being replaced by a 4096-bit wide, 1 GHz effective bus. Similarly, coordinating multiple GPUs is no longer limited to 16x PCIe 3.0 bandwidth and topology. Want a wide, private bus to link the tessellation units together to act as one large virtual tessellation unit? That is now possible. Effectively, by using an interposer to link multiple GPU dies together, it becomes feasible to present them as one large virtual GPU instead of two or four independent chips. Performance wouldn't scale perfectly linearly, but with the embarrassingly parallel nature of graphics, it'd be pretty close.
  • DanNeely - Wednesday, June 17, 2015 - link

    I expect DX12 uptake to be much faster than DX10/11's has been. DX10/11 only offered major eye-candy gains for people with top-end GPUs. Among the gaming populace as a whole (vs. Anandtech readers), there are far more people playing at 1080p medium on a $100 card, at 720p low on an IGP, or on what was a mid-to-high-end card when new but is no faster than today's budget cards, than at 1080p Ultra or higher on a newish $300+ card.

    Even at the lowest feature level, DX12 offers big performance improvements across anything that supports it, and both AMD and nVidia are committed to bringing it to their last several generations of GPUs. (Has Intel said how far back they intend to offer DX12 drivers?)

    I expect being able to game 'one quality level higher' for free will drive much faster uptake of Win10/DX12 than was the case with meaningful DX10/11 support (DX10/11 on a card that isn't fast enough to use any of the new eye candy doesn't matter). While legacy versions of DX will probably continue to be supported at the lowest quality settings for some time, two years from now I would be surprised if high/ultra settings aren't DX12-only on most major titles (it's cheaper to code the effect only once). I wouldn't be surprised if the same is true of medium quality on some as well.
  • tcube - Thursday, June 18, 2015 - link

    Yeah, a 1000-SP GPU die, a quad-core CPU die, and an uncore/IO/northbridge die (as Intel does with the PCH), mixed and matched with HBM stacks... but I would use HT links (a new iteration if needed to pump up the throughput) between the GPUs, not CrossFire crap. If AMD pulled this off they would be able to build quite complex chips, with distinct small teams working on distinct dies that just get plugged into the interposer as needed. They could scale a GPU or CPU up or down, or rebalance a chip, in a matter of a month or so if competition throws them a curveball... They could even marry 28nm chips to 14nm chips and such, while at the same time getting incredible yields on those chips.
  • FlushedBubblyJock - Tuesday, June 23, 2015 - link

    Maybe you, tcube, and Kevin G need to send your engineering credentials to AMD so they get on with it, or better yet you two could be co-CEOs and run the company correctly as soon as ET verifies your joint fantasies.
  • Aikouka - Wednesday, June 17, 2015 - link

    It's limited by the memory bus. You can toss as many memory chips on there as you desire, but unless you have I/O to tie the traces to, they're quite literally useless.
  • Gigaplex - Wednesday, June 17, 2015 - link

    The large number of tracks required to connect the stacks is the limiting factor. Future products will use higher capacity stacks.
  • Lolimaster - Thursday, June 18, 2015 - link

    They will ship an 8GB Fury X: same 4 stacks, but with double the density (256 --> 512MB per die).
  • CPUGPUGURU - Wednesday, June 17, 2015 - link

    What good is saving space when it needs a massive, space-wasting radiator? 4GB of VRAM is too little to be considered future-proof for 4K gaming.

    28nm GPUs will never come close to saturating HBM's bandwidth; HBM just isn't needed until the next process node at 16nm. GDDR5 is not bottlenecking 28nm GPUs, so 4GB of HBM1 is a marketing gimmick that's insufficient for 4K gaming.
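[Ed: as a sanity check on the bandwidth comparison in Kevin G's comment above, the arithmetic can be sketched as follows. The 5 Gbps per-pin figure is the R9 290X's shipping GDDR5 spec; the function name is our own.]

```python
# Peak memory bandwidth: (bus width in bits / 8 bits per byte) * per-pin data rate.

def peak_bandwidth_gbps(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and effective per-pin rate."""
    return bus_width_bits / 8 * data_rate_gbps

# R9 290X-style GDDR5: 512-bit bus at 5 Gbps effective per pin
gddr5 = peak_bandwidth_gbps(512, 5.0)    # 320.0 GB/s

# Fiji-style HBM1: 4096-bit bus at 1 Gbps effective per pin
hbm1 = peak_bandwidth_gbps(4096, 1.0)    # 512.0 GB/s

print(f"GDDR5: {gddr5:.0f} GB/s, HBM1: {hbm1:.0f} GB/s")
```

The takeaway is that HBM1 gets its bandwidth from width rather than clock speed: an 8x wider bus more than offsets a 5x slower per-pin rate.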
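[Ed: the capacity math in Lolimaster's comment above works out as sketched below, assuming HBM1's four DRAM dies per stack; the function name is our own.]

```python
# Total HBM capacity from stack count, dies per stack, and per-die density.

def hbm_capacity_gb(stacks: int, dies_per_stack: int, mb_per_die: int) -> float:
    """Total HBM capacity in GB for a given stack configuration."""
    return stacks * dies_per_stack * mb_per_die / 1024

four_gb = hbm_capacity_gb(4, 4, 256)    # 4.0 GB -- the shipping Fury X
eight_gb = hbm_capacity_gb(4, 4, 512)   # 8.0 GB -- the double-density variant

print(f"{four_gb:.0f} GB vs {eight_gb:.0f} GB")
```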
