The AMD Radeon R9 295X2 Review
by Ryan Smith on April 8, 2014 8:00 AM EST
Although the days of AMD’s “small die” strategy have long since ended, one aspect of that strategy AMD has stuck with since its inception is the concept of a dual-GPU card. AMD’s first modern dual-GPU card, the Radeon HD 3870 X2 (sorry, Rage Fury MAXX), came at a low point for the company, when such a product was needed just to close the gap between AMD’s products and NVIDIA’s flagship big die video cards. With AMD’s fortunes greatly improved these days, however, AMD no longer has to play just to tie; they can play to win. AMD’s dual-GPU cards have evolved accordingly, and these days they are the high-flying flagships of AMD’s lineup, embodying the concept of putting as much performance into a single card as is reasonably possible.
The last time we took a look at a new AMD dual-GPU video card was just under a year ago, when AMD launched the Radeon HD 7990. Based on AMD’s then-flagship Tahiti GPUs, the 7990 was a solid design that offered performance competitive with a dual card (7970GHz Edition Crossfire) setup while fixing many of the earlier Radeon HD 6990’s weaknesses. However the 7990 had its share of weaknesses and outright bad timing – it came just 2 months after NVIDIA released their blockbuster GeForce GTX Titan, and it launched right as the FCAT utility became available, enabling reliable frame pacing analysis and exposing the weak points in AMD’s drivers at the time.
Since then AMD has been hard at work on both the software and hardware sides of their business, sorting out their frame pacing problems but also launching new products in the process. Most significant among these was the launch of their newer GCN 1.1 Hawaii GPU, and the Radeon R9 290 series cards that are powered by it. Though Tahiti remains in AMD’s product stack, Hawaii’s greater performance and additional features heralded the retail retirement of the dual-Tahiti 7990, once again leaving an opening in AMD’s product stack.
That brings us to today and the launch of the Radeon R9 295X2. After much consumer speculation and more than a few teasers, AMD is releasing their long-awaited Hawaii-powered entry to their dual-GPU series of cards. With Hawaii AMD has a very powerful (and very power hungry) GPU at their disposal, and for its incarnation in the R9 295X2 AMD is going above and beyond anything they’ve done before, making it very clear that they’re playing to win.
**AMD GPU Specification Comparison**

| | AMD Radeon R9 295X2 | AMD Radeon R9 290X | AMD Radeon HD 7990 | AMD Radeon HD 7970 GHz Edition |
|---|---|---|---|---|
| Stream Processors | 2 x 2816 | 2816 | 2 x 2048 | 2048 |
| Texture Units | 2 x 176 | 176 | 2 x 128 | 128 |
| ROPs | 2 x 64 | 64 | 2 x 32 | 32 |
| Memory Clock | 5GHz GDDR5 | 5GHz GDDR5 | 6GHz GDDR5 | 6GHz GDDR5 |
| Memory Bus Width | 2 x 512-bit | 512-bit | 2 x 384-bit | 384-bit |
| VRAM | 2 x 4GB | 4GB | 2 x 3GB | 3GB |
| Transistor Count | 2 x 6.2B | 6.2B | 2 x 4.31B | 4.31B |
| Typical Board Power | 500W | 250W | 375W | 250W |
| Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 28nm | TSMC 28nm |
| Architecture | GCN 1.1 | GCN 1.1 | GCN 1.0 | GCN 1.0 |
Starting with a brief look at the specifications, much of the Radeon R9 295X2’s design, goals, and performance can be gleaned from the numbers alone. Whereas the 7990 was almost a 7970GE Crossfire setup on a single card, AMD is not making any compromises for the R9 295X2, equipping the card with a pair of fully enabled Hawaii GPUs and then clocking them even higher than their single-GPU flagship, the R9 290X. As a result, unlike AMD’s past dual-GPU cards, which made some performance tradeoffs in the name of power consumption and heat, AMD’s singular goal with the R9 295X2 is to offer the complete performance of an R9 290X “Uber” Crossfire setup on a single card.
Altogether this means we’re looking at a pair of top-tier Hawaii GPUs, each with their full 2816 SPs and 64 ROPs enabled. AMD has set the boost clock on these GPUs to 1018MHz – just 2% faster than the 290X – which means performance is generally a wash compared to an R9 290X CF setup; nonetheless, that small bump should offset the penalty from the additional latency introduced by the necessary PCIe bridge chip. Otherwise compared to the retired 7990, the R9 295X2 should be a far more capable card, offering 40% more shading/texturing performance and 2x the ROP throughput of AMD’s previous flagship. As with the R9 290X compared to the 7970GHz Edition, we’re still looking at parts that are fundamentally from the same generation and made on the same 28nm process, so AMD doesn’t get the benefits of a generational improvement in architecture and manufacturing. But even within the confines of 28nm, AMD has been able to do quite a bit with Hawaii to improve performance over Tahiti based products.
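For those who want to check the math, the throughput deltas above fall straight out of the spec table. A quick sketch follows; note that the 2 FLOPs-per-SP-per-clock figure (FMA) and the 1000MHz boost clock assumed for the 7990's Tahiti GPUs are our assumptions, not AMD-supplied numbers.

```python
# Back-of-the-envelope check on the R9 295X2 vs. HD 7990 throughput claims.
# Assumes 2 FLOPs per stream processor per clock (one FMA) and a 1000MHz
# boost clock for the 7990's Tahiti GPUs.

def sp_gflops(stream_processors: int, clock_ghz: float) -> float:
    """Peak single-precision GFLOPS for one GPU."""
    return stream_processors * 2 * clock_ghz

r9_295x2_per_gpu = sp_gflops(2816, 1.018)  # fully enabled Hawaii @ 1018MHz
hd_7990_per_gpu = sp_gflops(2048, 1.000)   # Tahiti @ assumed 1000MHz boost

print(f"Shading advantage: {r9_295x2_per_gpu / hd_7990_per_gpu - 1:.0%}")  # ~40%
print(f"ROP advantage: {64 / 32:.0f}x")  # Hawaii's 64 ROPs vs. Tahiti's 32
```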
Meanwhile AMD is taking the same no-compromises strategy when it comes to memory. The R9 290X was equipped with 4GB of 5GHz GDDR5, operating on a 512-bit memory bus, and for the R9 295X2 in turn each GPU is getting the same 4GB of memory on the same bus. The fact that AMD has been able to lay down 1024 GDDR5 memory bus lines on a single board is no small feat (the wails of the engineers can be heard for miles), and while it is necessary to keep up with the 290X we weren’t entirely sure if AMD was going to be able and willing to pull it off. Nonetheless, the end result is that each GPU gets the same 320GB/sec as the 290X does, and compared to the 7990 this is an 11% increase in memory bandwidth, not to mention a 33% increase in memory capacity.
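The bandwidth figures are likewise easy to verify: GDDR5's quoted clock is its effective data rate per pin, so per-GPU bandwidth is just data rate times bus width divided by 8 bits per byte. A minimal sketch (the function name is ours):

```python
# Per-GPU memory bandwidth from GDDR5 effective data rate and bus width.

def gddr5_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: (Gbps per pin) x (bus width) / (8 bits per byte)."""
    return data_rate_gbps * bus_width_bits / 8

r9_295x2 = gddr5_bandwidth_gb_s(5, 512)  # 320 GB/s per Hawaii GPU
hd_7990 = gddr5_bandwidth_gb_s(6, 384)   # 288 GB/s per Tahiti GPU

print(f"{r9_295x2:.0f} GB/s vs {hd_7990:.0f} GB/s: "
      f"{r9_295x2 / hd_7990 - 1:.0%} more bandwidth")  # ~11%
```

The same arithmetic gives the capacity delta: 4GB vs 3GB per GPU is the 33% increase noted above.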
Now, as can be expected of any card its manufacturer labels “no compromises,” all of this performance comes at a cost. Hawaii is a very powerful GPU but it is also very power hungry; AMD has finally given us an official Typical Board Power (TBP) for the R9 290X of 250W, and with the R9 295X2 AMD is outright doubling it. The R9 295X2 is a 500W card, the first 500W reference card from either GPU manufacturer.
As one can expect, moving 500W of heat is no easy task. AMD came close once before with the 6990 – a card designed to handle up to 450W in its AUSUM mode – but the 6990 was dogged by the incredibly loud split blower AMD needed to use to cool the beast. For the 7990 AMD lowered their sights, dropping the power target to just 375W and moving to a large open air cooler that allowed them to offer a dual-GPU card with reasonable noise levels. But for the R9 295X2 AMD is once again turning up the heat, requiring new cooling methods to dissipate 500W while maintaining reasonable noise levels.
To dissipate 500W of heat AMD has moved past blowers and even open air coolers, and on to a closed loop liquid cooler (CLLC). We’ll cover AMD’s cooling apparatus in more detail when we take a closer look at the construction of the R9 295X2, but as with the 500W power target, AMD is charting new territory for a reference card by making a CLLC the baseline cooler. With two Asetek pumps and a 120mm radiator to dissipate heat, the R9 295X2 is a significant departure from AMD’s past designs and an equally significant change in the traditionally conservative system requirements for a reference card.
In any case, the fact that AMD went this route isn’t wholly surprising – there aren’t too many ways to move 500W of heat – but the lack of more aggressive binning did catch us off guard. Dual-GPU cards often (but not always) use highly binned GPUs to further contain power consumption; AMD hasn’t binned as heavily this time around, hence the R9 295X2’s doubled power consumption. So long as AMD can remove the heat they’ll be fine, and from our test results it’s clear that AMD has done some binning, but nonetheless it’s interesting that we aren’t seeing binning as aggressive as in past years.
Finally, let’s dive into pricing, availability, and competition. Given the relatively exotic cooling requirements for the R9 295X2, it comes as no great surprise that AMD is targeting the same luxury video card crowd that the GTX Titan pioneered last year when it premiered at $1000. This means using more expensive cooling devices, a greater emphasis on build quality with a focus on metal shrouding, and a few gimmicks to make the card stand out in windowed cases. To that end the R9 295X2 will by its very nature be an extremely low volume part, but if AMD has played their cards right it will be the finest card they have ever built.
The price for that level of performance and quality on a single card will be $1499 (€1099 + VAT), $500 higher than the 7990’s $999 launch price, and similarly $500 higher than NVIDIA’s closest competitor, the GTX Titan Black. With two R9 290Xs running for roughly $1200 at current prices, we’ve expected for some time that a dual-GPU Hawaii card would come in over $1000, so AMD isn’t too far off from our expectations. Ultimately AMD’s $1500 price tag amounts to a $300 premium for getting two 290Xs onto a single card, along with the 295X2’s much improved build quality and more complex cooling apparatus. Meanwhile GPU complexity and heat density have reached the point where the cost of putting together a dual-GPU card exceeds the combined cost of the single cards it’s based on, so these kinds of dual-GPU premiums are here to stay.
As always, the R9 295X2’s competition will be a mix of dual video card setups such as dual R9 290Xs and dual GTX 780 Tis, and of course NVIDIA’s forthcoming dual-GPU card. Dual video card setups will almost always be cheaper than a single dual-GPU card, so the tradeoff lies in the smaller space requirements of a single video card and the power/heat/noise savings that such a card provides. In the AMD ecosystem the reference 290X is dogged by its loud reference cooler, so as we’ll see in our test results the R9 295X2 will have a significant advantage over the 290X when it comes to noise.
Meanwhile in NVIDIA’s ecosystem, NVIDIA has the dual GTX 780 Ti, the dual GTX Titan Black, and the GTX Titan Z. The dual GTX 780 Ti is going to be the closest competitor to the R9 295X2 at roughly $1350, with a pair of GTX Titan Blacks carrying both a performance edge and a significant price premium. As for the GTX Titan Z, NVIDIA’s forthcoming dual-GPU card is scheduled to launch later this month, and while it should be a performance powerhouse it’s also going to retail at $3000, twice the price of the R9 295X2. So although the GTX Titan Z can be used for gaming, we’re expecting it to be leveraged more for its compute performance than its gaming performance. In any case, based on NVIDIA’s theoretical performance figures we have a strong suspicion that the GTX Titan Z is underclocked for TDP reasons, so it remains to be seen whether it will even be competitive with the R9 295X2 in gaming performance.
For availability the R9 295X2 will be a soft launch for AMD, with AMD announcing the card 2 weeks ahead of its expected retail date. AMD tells us that the card should start appearing at retailers and in boutique systems on the week of April 21st, and while multiple AMD partners will be offering this card we don’t have a complete list of partners at this time (but expect it to be a short list). The good news is that unlike most of AMD’s recent product launches, we aren’t expecting availability to be a significant problem. Due to the price premium over a pair of 290Xs and recent drops in cryptocoin value, it’s unlikely that miners will want the 295X2, meaning the demand and customer base should follow the more traditional gamer demand curves.
Finally, it’s worth noting that unlike the launch of the 7990, AMD isn’t doing any game bundle promotions for the R9 295X2. AMD hasn’t been nearly as aggressive on game bundles this year, and in the case of the R9 295X2 there isn’t a specific product (e.g. GTX Titan) that AMD needs to counter. Any pack-in items – be it games or devices – will be the domain of the board partners this time around. Also, the AMD Mystery Briefcase was just a promotional item, so partners won’t be packing their retail cards quite so extravagantly.
**Spring 2014 GPU Pricing Comparison**

| AMD | Price | NVIDIA |
|---|---|---|
| | $3000 | GeForce GTX Titan Z |
| Radeon R9 295X2 | $1500 | |
| | $1100 | GeForce GTX Titan Black |
| | $700 | GeForce GTX 780 Ti |
| Radeon R9 290X | $600 | |
| Radeon R9 290 | $500 | GeForce GTX 780 |
Dupl3xxx - Wednesday, April 9, 2014 - link
$2k+ for a 4K screen? Where are you wasting your money? In Norway you can get a 4K screen for just about 5000 NOK, or just about 850 USD, including tax! Also, why would you need a $1500 CPU when the 4930K is 200MHz slower, for half the price?
Also, WHY would you want 32GB of 2400MHz RAM!?!?!?! There is next to no improvement over 1600MHz!
As far as SSDs go, a single Samsung 250/500GB should be plenty; you've got 32GB of RAM to use as buffer!
And if you want a "tight" system with insane performance, the 295X2 is the best choice ATM. Double the 290X performance, "half" the size.
lehtv - Wednesday, April 9, 2014 - link
Another difference is the way this card handles heat compared to any 290X CF setup apart from custom water cooling. The CLLC combines the benefits of reference GPUs - the ability to exhaust hot air externally rather than into the case - with the benefits of third party cooling - the ability to keep temperatures and noise levels lower than those of reference blower cards. A 290X crossfire setup using reference cooling is not even worth considering for anyone who cares about noise output, while third party 290X crossfire is restricted to cases with enough cooling capacity to handle the heat.
Supersonic494 - Friday, April 11, 2014 - link
You are right, but keep in mind one big limitation with normal Crossfire/SLI is the space taken up by two big dual-slot GPUs; with this it's only one card. Other than that, however, you might as well get two 290Xs.
bj_murphy - Friday, April 11, 2014 - link
A dual-GPU card doesn't require two PCI-E slots; you can't do SLI/Crossfire in a Mini-ITX system, for example.
HalloweenJack - Tuesday, April 8, 2014 - link
Muppet - 20W more in FurMark, and 160W in games - not hundreds more. Keep drinking the AnandTech koolaid.
WaltC - Tuesday, April 8, 2014 - link
Interesting. [H] seems to have done some pretty thorough testing, and the AMD card blows by 780Ti SLI in every single case. Of course, [H] is testing @ 4k resolutions/3-way Eyefinity exclusively--but that's where anyone who shells out this kind of money is going to be. 1080P? Don't make me laugh...;)
WaltC - Tuesday, April 8, 2014 - link
Can't edit, so I'll just say I don't know where "1080P" came from...;)
lwooood - Tuesday, April 8, 2014 - link
Apologies for going slightly OT. Is there any indication of when AMD will fill in the middle of their product stack with GCN 1.1 parts?
sascha - Tuesday, April 8, 2014 - link
I'd like to know that, too!
MrSpadge - Tuesday, April 8, 2014 - link
I would say that indication is 20nm chips, at the end of the year at the earliest.