Today’s Review: Radeon R9 Fury X

Now that we’ve had a chance to cover all of the architectural and design aspects of the Fiji GPU and the cards built on it, let’s get down to the business end of this article: the product we’re reviewing today.

Having launched last week and coming under review today is AMD’s Radeon R9 Fury X, the company’s new flagship single-GPU video card. Featuring a fully enabled Fiji GPU, the R9 Fury X is Fiji at its finest, and a safe bet to be the most powerful video card AMD ever releases on TSMC’s 28nm process. Fiji is clocked high, cooled with overkill, and priced to go right up against the only GM200 GeForce card from NVIDIA that anyone cares about: the GeForce GTX 980 Ti.

AMD GPU Specification Comparison

| | AMD Radeon R9 Fury X | AMD Radeon R9 Fury | AMD Radeon R9 290X | AMD Radeon R9 290 |
|---|---|---|---|---|
| Stream Processors | 4096 | (Fewer) | 2816 | 2560 |
| Texture Units | 256 | (How Many?) | 176 | 160 |
| ROPs | 64 | (Depends) | 64 | 64 |
| Boost Clock | 1050MHz | (On Yields) | 1000MHz | 947MHz |
| Memory Clock | 1Gbps HBM | (Memory Too) | 5Gbps GDDR5 | 5Gbps GDDR5 |
| Memory Bus Width | 4096-bit | 4096-bit | 512-bit | 512-bit |
| VRAM | 4GB | 4GB | 4GB | 4GB |
| FP64 | 1/16 | 1/16 | 1/8 | 1/8 |
| TrueAudio | Y | Y | Y | Y |
| Transistor Count | 8.9B | 8.9B | 6.2B | 6.2B |
| Typical Board Power | 275W | (High) | 250W | 250W |
| Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 28nm | TSMC 28nm |
| Architecture | GCN 1.2 | GCN 1.2 | GCN 1.1 | GCN 1.1 |
| GPU | Fiji | Fiji | Hawaii | Hawaii |
| Launch Date | 06/24/15 | 07/14/15 | 10/24/13 | 11/05/13 |
| Launch Price | $649 | $549 | $549 | $399 |

With a maximum boost clockspeed of 1050MHz and 4096 SPs organized into 64 CUs, the R9 Fury X has been designed to deliver more shading/compute performance than ever before. Hawaii, by comparison, topped out at 2816 SPs (44 CUs), giving the R9 Fury X a 1280 SP (~45%) advantage in raw shading hardware. Meanwhile, as a result of scaling up the number of CUs, the texture unit count has also grown to 256, a new high-water mark for a single GPU from any vendor.
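
Those unit counts fall directly out of GCN’s fixed per-CU resources, with 64 SPs and 4 texture units allotted to each CU; a quick sanity-check sketch in Python:

```python
# GCN allocates 64 stream processors and 4 texture units to each CU,
# so both totals scale linearly with the CU count.
SPS_PER_CU, TEX_PER_CU = 64, 4

for name, cus in (("Fiji (R9 Fury X)", 64), ("Hawaii (R9 290X)", 44)):
    print(f"{name}: {cus * SPS_PER_CU} SPs, {cus * TEX_PER_CU} texture units")
# Fiji (R9 Fury X): 4096 SPs, 256 texture units
# Hawaii (R9 290X): 2816 SPs, 176 texture units

print(f"SP advantage: {(64 * SPS_PER_CU) / (44 * SPS_PER_CU) - 1:.0%}")  # -> ~45%
```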

Getting away from the CUs for a second, the R9 Fury X features less dramatic changes at its front-end and back-end relative to Hawaii. Like Hawaii, the R9 Fury X features 4 geometry engines on the front-end and 64 ROPs on the back-end, so from a theoretical standpoint Fiji does not have any additional resources to work with on those portions of the rendering pipeline. That said, what the raw specifications do not cover are the architectural optimizations we have covered in past pages, which should see Fiji’s ROPs and geometry engines both perform better per unit and per clock than Hawaii’s. Meanwhile the other significant influence here is the extensive memory bandwidth afforded by High Bandwidth Memory, which, combined with a larger 2MB L2 cache, should leave the ROPs far better fed on the R9 Fury X than they were on AMD’s Hawaii cards.

As for High Bandwidth Memory itself, the next-generation memory technology gives AMD more memory bandwidth than ever before. Featuring an ultra-wide 4096-bit memory bus clocked at 1Gbps (500MHz DDR), the R9 Fury X has a whopping 512GB/sec of memory bandwidth, fed by 4GB of HBM organized in 4 stacks of 1GB each. Relative to the R9 290X, this represents a 60% increase in memory bandwidth, a true generational jump the likes of which we will not see again in an AMD GPU for some years to come.
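
Since peak memory bandwidth is simply bus width times per-pin data rate, both the 512GB/sec figure and the 60% uplift are easy to verify; a quick back-of-the-envelope sketch in Python, using the figures from the specification table above:

```python
# Peak memory bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte
def peak_bandwidth(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/sec."""
    return bus_width_bits * data_rate_gbps / 8

fury_x  = peak_bandwidth(4096, 1.0)  # 4096-bit HBM @ 1Gbps   -> 512.0 GB/sec
r9_290x = peak_bandwidth(512, 5.0)   # 512-bit GDDR5 @ 5Gbps  -> 320.0 GB/sec

print(f"R9 Fury X: {fury_x:.0f}GB/sec, R9 290X: {r9_290x:.0f}GB/sec")
print(f"Bandwidth increase: {fury_x / r9_290x - 1:.0%}")  # -> 60%
```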

Consequently the performance expectations for the R9 Fury X will vary significantly with the nature of the rendering workload. For pure compute workloads, between the 45% increase in SPs and the 5% clockspeed increase, the R9 Fury X will be up to 53% faster than the R9 290X. Meanwhile for ROP-bound scenarios the difference can be anywhere between 5% and 60%, depending on how bandwidth-bound the task is and how effective delta compression is in shrinking the bandwidth requirements. Real-world expectations are 30-40% over the R9 290X, depending on the game and the resolution, with the R9 Fury X extending its gains at higher resolutions.
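
For reference, here is how those bounding figures fall out of the specifications; a minimal sketch of the arithmetic, not a performance model:

```python
# Theoretical upper/lower bounds, derived purely from the spec table above.
fury_x_sps,  fury_x_mhz  = 4096, 1050
r9_290x_sps, r9_290x_mhz = 2816, 1000

sp_ratio    = fury_x_sps / r9_290x_sps   # ~1.45x the shading hardware
clock_ratio = fury_x_mhz / r9_290x_mhz   # 1.05x the boost clockspeed

# Pure compute scales with SPs x clock; ROP throughput (64 ROPs on both
# cards) scales with clock alone, hence the 5% floor for ROP-bound work.
print(f"Compute-bound ceiling: {sp_ratio * clock_ratio - 1:.0%}")  # -> ~53%
print(f"ROP-bound floor:       {clock_ratio - 1:.0%}")             # -> 5%
```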

For AMD, the Radeon R9 Fury X is a critically important card for a number of reasons. From a technology perspective this is the very first HBM card, and consequently the missteps AMD makes and the lessons they learn here will be important for future generations of cards. At the same time, from a competitive perspective, the importance of a flagship cannot be ignored. While flagship card sales are only a tiny part of overall card sales for NVIDIA and AMD, the PC video card industry is (in)famous for its window shopping and the emphasis put on which card holds the performance crown. Most buyers cannot (or will not) buy a card like the R9 Fury X, but the sales impact of holding the crown is undeniable, as buyers as a whole will favor whichever vendor holds it. After seeing their consumer discrete market share fall to its lowest level in years, AMD is gunning to get the crown back, along with the halo effect that comes with it and spurs on so many additional sales of lower-end cards.

The competition for the R9 Fury X is of course NVIDIA’s recently released GeForce GTX 980 Ti. Based on a cut-down version of NVIDIA’s GM200 GPU, the GTX 980 Ti is an odd card that comes entirely too close to NVIDIA’s official flagship, the GTX Titan X, in performance (~95%), to the point where although the GTX Titan X is the de jure flagship for NVIDIA, the GTX 980 Ti is the de facto flagship for the company. Meanwhile, although only NVIDIA knows for sure, given the timing of the GTX 980 Ti’s launch there is every reason to believe that the company released it with the specific intent of pre-empting the R9 Fury X, so AMD does not enjoy a first-mover advantage here.

Price-wise the R9 Fury X has launched at $649, the same price as the GTX 980 Ti, so between these two cards this is a straight-up fistfight. There is no price spoiler effect in play here; the question simply comes down to which card is the better card. The only advantage for either party in this case is that NVIDIA is offering a free copy of Batman: Arkham Knight with GTX 980 Ti cards, not that the PC port of the game is much of an asset at this time given its poor state.

Finally, as far as launch quantities are concerned, AMD has declined to comment on how many R9 Fury X cards were available for launch. What we do know is that the cards sold out on the first day, and we have yet to see a massive restocking take place, though at just a week post-launch a restock would not normally have arrived quite this soon anyway. In any case, whether due to demand, supply, or a mix of the two, the initial launch allocations of the R9 Fury X did sell out, and for the moment getting a card is easier said than done.

Summer 2015 GPU Pricing Comparison

| AMD | Price | NVIDIA |
|---|---|---|
| Radeon R9 Fury X | $649 | GeForce GTX 980 Ti |
| | $499 | GeForce GTX 980 |
| Radeon R9 390X | $429 | |
| Radeon R9 290X / Radeon R9 390 | $329 | GeForce GTX 970 |
| Radeon R9 290 | $250 | |
| Radeon R9 380 | $200 | GeForce GTX 960 |
| Radeon R7 370 / Radeon R9 270 | $150 | |
| | $130 | GeForce GTX 750 Ti |
| Radeon R7 360 | $110 | |

Comments

  • chizow - Friday, July 3, 2015

    Pretty much, AMD supporters/fans/apologists love to parrot the meme that Intel hasn't innovated since the original i7 or whatever, and while development there has certainly slowed, we have a number of 18-core E5-2699v3 servers in my data center at work, Broadwell Iris Pro iGPs that handily beat AMD APUs and approach low-end dGPU perf, and ultrabooks and tablets that run on fanless 5W Core M CPUs. Oh, and I've also managed to find meaningful desktop upgrades every few years for no more than $300 since Core 2 put me back in Intel's camp for the first time in nearly a decade.
  • looncraz - Friday, July 3, 2015

    None of what you stated is innovation, merely minor evolution. The core design is the same, gaining only ~5% or so IPC per generation, same basic layouts, same basic tech. Are you sure you know what "innovation" means?

    Bulldozer modules were an innovative design. A failure, but still very innovative. Pentium Pro and Pentium 4 were both innovative designs, both seeking performance in very different ways.

    Multi-core CPUs were innovative (AMD), HBM is innovative (AMD+Hynix), multi-GPU was innovative (3dfx), SMT was innovative (IBM, Alpha), CPU+GPU was innovative (Cyrix, IIRC)... you get the idea.

    Doing the exact same thing, more or less the exact same way, but slightly better, is not innovation.
  • chizow - Sunday, July 5, 2015

    Huh? So putting Core level performance in a passive design that is as thin as a legal pad and has 10 hours of battery life isn't innovation?

    Increasing iGPU performance to the point that it not only accompanies top-end CPU performance but comes close to dGPU performance, all while convincingly beating AMD's entire reason for buying ATI, their Fusion APUs, isn't innovation?

    And how about the data center where Intel's *18* core CPUs are using the same TDP and sockets, in the same U rack units as their 4 and 6 core equivalents of just a few years ago?

    Intel is still innovating in different ways, that may not directly impact the desktop CPU market but it would be extremely ignorant to claim they aren't addressing their core growth and risk areas with new and innovative products.

    I've bought more Intel products in recent years vs. prior strictly because of these new innovations that are allowing me to have high performance computing in different form factors and use cases, beyond being tethered to my desktop PC.
  • looncraz - Friday, July 3, 2015

    Show me Intel CPU innovations since the Pentium 4.

    Mind you, innovations can be failures, they can be great successes, or they can be ho-hum.

    P6->Core->Nehalem->Sandy Bridge->Haswell->Skylake

    The only changes are evolutionary or as a result of process changes (which I don't consider CPU innovations).

    This is not to say that they aren't fantastic products - I'm rocking an i7-2600k for a reason - they just aren't innovative products. Indeed, nVidia's Maxwell is a wonderfully designed and engineered GPU, and products based on it are of the highest quality and performance. That doesn't make them innovative in any way. Nothing technically wrong with that, but I wonder how long it would have taken before someone else came up with a suitable RAM just for GPUs if AMD hadn't done it?
  • chizow - Sunday, July 5, 2015

    I've listed them above, and despite the slowing pace of improvements on the desktop CPU side you are still looking at a 30-45% improvement clock-for-clock between Nehalem and Haswell, along with pretty massive improvements in stock clock speed. Not bad given they've had literally zero pressure from AMD. If anything, Intel dominating in a virtual monopoly has afforded me much cheaper and more consistent CPU upgrades, all of which provided significant improvements over the previous platform:

    E6600 $284
    Q6600 $299
    i7 920 $199!
    i7 4770K $229
    i7 5820K $299

    All cheaper than the $450 AMD wanted for their ENTRY level Athlon 64 when they finally got the lead over Intel, which made it an easy choice to go to Intel for the first time in nearly a decade after AMD got Conroe'd in 2006.
  • silverblue - Monday, July 6, 2015

    I could swear that you've posted this before.

    I think the drop in prices was more of an attempt to strangle AMD than anything else. Intel can afford it, after all.
  • chizow - Monday, July 6, 2015

    Of course I've posted it elsewhere, because it bears repeating: the nonsensical meme AMD fanboys love to parrot about AMD being necessary for low prices and strong competition is a farce. I've enjoyed unparalleled stability at a similar or higher level of relative performance in the years since AMD became UNCOMPETITIVE in the CPU market. There is no reason to expect otherwise in the dGPU market.
  • zoglike@yahoo.com - Monday, July 6, 2015

    Really? Intel hasn't innovated? I really hope you are trolling because if you believe that I fear for you.
  • chizow - Thursday, July 2, 2015

    Let's not also discount the fact that those are just stock comparisons; once you overclock the cards, as many are interested in doing in this $650 bracket, especially with AMD's claims that Fury X is an "Overclocker's Dream", we quickly see the 980Ti cannot be touched by Fury X, water cooler or not.

    Fury X wouldn't have been the failure it is today if not for AMD setting unrealistic and, ultimately, unattained expectations. A 390X WCE at $550-$600 would have been a solid alternative. A $650 new "Premium" brand card that doesn't OC at all, has only 4GB, has pump whine issues, and is slower than Nvidia's same-priced $650 980Ti that launched 3 weeks before it just doesn't get the job done after AMD hyped it from the top brass down.
  • andychow - Thursday, July 2, 2015

    Yeah, "Overclocker's dream", only overclocks by 75 MHz. Just by that statement, AMD has totally lost me.
