To say it’s been a busy month for AMD is probably something of an understatement. After hosting a public GPU showcase in Hawaii just under a month ago, the company has already launched the first 5 cards in the Radeon 200 series – the 280X, 270X, 260X, 250, and 240 – and AMD isn’t done yet. Riding a wave of anticipation and saving the best for last, today AMD is finally launching the Big Kahuna: the Radeon R9 290X.

The 290X is not only the fastest card in AMD’s 200 series lineup, but the 290 series also contains the only new GPU in AMD’s latest generation of video cards. Dubbed Hawaii, this GPU is AMD’s bid for a second wind between manufacturing node launches. By taking what they learned from Tahiti and building a refined GPU against a much more mature 28nm process – something that also opens the door to a less conservative design – AMD has been able to build a bigger, better Tahiti that continues down the path laid out by their Graphics Core Next architecture while bringing some new features to the family.

Bigger and better isn’t just a figure of speech, either. The GPU really is bigger, and the performance is unquestionably better. After vying with NVIDIA for the GPU performance crown for the better part of a year, AMD fell out of the running earlier this year with the release of NVIDIA’s GK110-powered GTX Titan, and now AMD wants that crown back.

AMD GPU Specification Comparison

|   | AMD Radeon R9 290X | AMD Radeon R9 280X | AMD Radeon HD 7970 | AMD Radeon HD 6970 |
|---|---|---|---|---|
| Stream Processors | 2816 | 2048 | 2048 | 1536 |
| Texture Units | 176 | 128 | 128 | 96 |
| ROPs | 64 | 32 | 32 | 32 |
| Core Clock | 727MHz? | 850MHz | 925MHz | 880MHz |
| Boost Clock | 1000MHz | 1000MHz | N/A | N/A |
| Memory Clock | 5GHz GDDR5 | 6GHz GDDR5 | 5.5GHz GDDR5 | 5.5GHz GDDR5 |
| Memory Bus Width | 512-bit | 384-bit | 384-bit | 256-bit |
| VRAM | 4GB | 3GB | 3GB | 2GB |
| FP64 | 1/8 | 1/4 | 1/4 | 1/4 |
| TrueAudio | Y | N | N | N |
| Transistor Count | 6.2B | 4.31B | 4.31B | 2.64B |
| Typical Board Power | ~300W (Unofficial) | 250W | 250W | 250W |
| Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 28nm | TSMC 40nm |
| Architecture | GCN 1.1 | GCN 1.0 | GCN 1.0 | VLIW4 |
| GPU | Hawaii | Tahiti | Tahiti | Cayman |
| Launch Date | 10/24/13 | 10/11/13 | 12/28/11 | 12/15/10 |
| Launch Price | $549 | $299 | $549 | $369 |

We’ll dive into the full architectural details of Hawaii a bit later, but as usual let’s open up with a quick look at the specs of today’s card. Hawaii is a GCN 1.1 part – the second such part from AMD – and because of that comparisons with older GCN parts are very straightforward. For gaming workloads in particular we’re looking at a GCN GPU with even more functional blocks than Tahiti and even more memory bandwidth to feed it, and 290X performs accordingly.

Compared to Tahiti, AMD has significantly bulked up both the front end and the back end of the GPU, doubling each of them. The front end now contains 4 geometry processor and rasterizer pairs, up from 2 such pairs on Tahiti, while on the back end we’re now looking at 64 ROPs versus Tahiti’s 32. Meanwhile in the computational core AMD has gone from 32 CUs to 44, increasing the amount of shading/texturing hardware by 38%.

On the other hand, GPU clockspeeds on the 290X are being held consistent with the recently released 280X: AMD is shipping the card with a maximum boost clock of 1GHz (they’re unfortunately still not telling us the base GPU clockspeed), which means any significant performance gains will have to come from the larger number of functional units. With that in mind, we’re looking at a video card that has 200% of the 280X’s geometry/ROP throughput and 138% of its shader/texturing throughput. In the real world performance will trend closer to the shader/texturing figure – ROP/geometry bottlenecks don’t scale out as easily as shading bottlenecks – so for most scenarios the upper bound for performance increases is that 38%.
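
Those ratios fall straight out of the unit counts in the spec table. Here’s a quick back-of-the-envelope sketch in Python – paper math from the table above, not a benchmark:

```python
# Theoretical 290X vs. 280X throughput scaling at the same 1GHz boost clock.
cus_280x, cus_290x = 32, 44    # compute units (2048 vs. 2816 stream processors)
rops_280x, rops_290x = 32, 64  # ROPs; the geometry front end doubles as well

# With identical clocks, peak throughput scales directly with unit counts.
print(f"Shader/texture: {cus_290x / cus_280x:.0%}")   # 138%
print(f"Geometry/ROP:   {rops_290x / rops_280x:.0%}")  # 200%
```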

Meanwhile the job of feeding Hawaii comes down to AMD’s fastest memory bus to date. The 280X and other Tahiti cards already ship with a 384-bit memory bus running at 6GHz – and consume quite a bit of die space to get there – so to increase their available memory bandwidth AMD has opted to rebalance their memory configuration in favor of a wider, lower clockspeed memory bus. For Hawaii we’re looking at a 512-bit memory bus paired with 5GHz GDDR5, which brings the total memory bandwidth to 320GB/sec. The reduced clockspeed means that AMD’s total memory bandwidth gains aren’t quite as large as the increase in the bus width itself, but compared to the 288GB/sec on the 280X this is still an 11% increase in memory bandwidth, and a move very much needed to feed the larger number of ROPs that come with Hawaii. More interesting however is that in spite of the larger memory bus, the total size of AMD’s memory interface has gone down compared to Tahiti, and we’ll see why in a bit.
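
The bandwidth figures themselves follow directly from bus width and data rate. A minimal sketch, treating the quoted “5GHz”/“6GHz” memory clocks as effective GDDR5 data rates:

```python
# GDDR5 bandwidth = (bus width in bytes) x (effective data rate in GT/s).
def bandwidth_gb_s(bus_width_bits: int, data_rate_gt_s: float) -> float:
    return bus_width_bits / 8 * data_rate_gt_s

bw_290x = bandwidth_gb_s(512, 5.0)  # 320.0 GB/sec
bw_280x = bandwidth_gb_s(384, 6.0)  # 288.0 GB/sec
print(f"290X: {bw_290x:.0f}GB/sec vs. 280X: {bw_280x:.0f}GB/sec "
      f"(+{bw_290x / bw_280x - 1:.0%})")  # +11%
```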

At the same time, because AMD’s memory interface is so compact they’ve been able to move to a 512-bit memory bus without requiring too large a GPU. At 438mm2 and composed of 6.2B transistors, Hawaii is the largest GPU ever produced by AMD – 18mm2 bigger than R600 (HD 2900) – but compared to the 365mm2, 4.31B transistor Tahiti, AMD has been able to pack a larger memory bus and a much larger number of functional units into the GPU for only a 73mm2 (20%) increase in die size. The end result is that AMD has once again significantly improved their efficiency on a die size basis while remaining on the same process node. AMD is no stranger to producing these highly optimized second wind designs, having done something similar for the 40nm era with Cayman (HD 6900), and as with Cayman the payoff is the ability to increase performance and efficiency between new manufacturing nodes, something that will become increasingly important for GPU manufacturers as the rate of fab improvements continues to slow.
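
Put into numbers, the die size and transistor figures from the spec table work out as follows – a rough aggregate calculation, since actual density varies by functional block:

```python
# Hawaii packs ~44% more transistors into a die only ~20% larger than Tahiti.
tahiti_area_mm2, tahiti_transistors = 365, 4.31e9
hawaii_area_mm2, hawaii_transistors = 438, 6.2e9

area_growth = hawaii_area_mm2 / tahiti_area_mm2 - 1              # ~20%
transistor_growth = hawaii_transistors / tahiti_transistors - 1  # ~44%
density_gain = ((hawaii_transistors / hawaii_area_mm2) /
                (tahiti_transistors / tahiti_area_mm2) - 1)      # ~20%

print(f"Die area: +{area_growth:.0%}, transistors: +{transistor_growth:.0%}, "
      f"density: +{density_gain:.0%}")
```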

Moving on, let’s quickly talk about power consumption. With Hawaii AMD has made a number of smaller changes, both to the power consumption of the silicon itself and to how it is defined. On the technical side of matters AMD has been able to reduce transistor leakage compared to Tahiti, directly reducing power consumption of the GPU as a result, and this is being paired with changes to certain aspects of their power management system, implementing advanced power/performance management capabilities that vastly improve the granularity of their power states (more on this later).

However, at the same time, how power consumption is defined is getting far murkier. AMD doesn’t list the power consumption of the 290X in any of their documentation or specifications, and after asking them directly we’re only being told that the “average gaming scenario power” is 250W. We’ll dive into this more when we break down the changes to PowerTune on the 290X, but in short AMD is likely underreporting the 290X’s power consumption. Based on our test results we’re seeing the 290X draw more power than any other “250W” card in our collection, and in reality the TDP of the card is almost certainly closer to 300W. There are limits to how long the card can sustain that level of power draw due to cooling requirements, but given sufficient cooling the power limit of the card appears to be around 300W, and for the moment we’re labeling it as such.


Left To Right: 6970, 7970, 290X

Finally, let’s talk about pricing, availability, and product positioning. As AMD already launched the rest of the 200 series 2 weeks ago, the 290X is primarily filling the opening at the top of AMD’s product lineup that the rest of the 200 series created. The 7000 series is in the middle of its phase-out – and the 7990 can’t be too far behind – so the 290X is quickly going to become AMD’s de facto top tier card.

The price AMD will be charging for this top tier is $549, which happens to be the same price as the 7970 when it launched at the end of 2011. This is about $100-$150 more expensive than the outgoing 7970GE and $250 more expensive than the 280X, with the 290X offering an average performance increase over the 280X of 30%. Meanwhile, placed against NVIDIA’s lineup, the primary competition for the 290X will be the $650 GeForce GTX 780, a card that the 290X can consistently beat, making AMD the immediate value proposition at the high end. At the same time however, NVIDIA will have their 3 game Holiday GeForce Bundle starting on the 28th, making this an interesting inversion of earlier this year, when it was AMD offering large game bundles to improve the competitive positioning of their products versus NVIDIA’s. As always, the value of bundles is ultimately up to the buyer, especially in this case, since we’re looking at a rather significant $100 price gap between the 290X and the GTX 780.
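
To put that positioning in rough numbers, here’s a hypothetical perf-per-dollar sketch using the article’s own figures; the GTX 780’s relative performance is an illustrative assumption (the review only says the 290X consistently beats it), not a measured result:

```python
# Hypothetical value comparison; performance is normalized to the 280X.
cards = {
    "R9 280X": (299, 1.00),  # (price in USD, relative performance) - baseline
    "R9 290X": (549, 1.30),  # +30% over the 280X, per the review's average
    "GTX 780": (650, 1.25),  # assumption: slightly behind the 290X
}
for name, (price, perf) in cards.items():
    print(f"{name}: {100 * perf / price:.2f} perf per $100")
```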

Finally, unlike the 280X, this is going to be a very hard launch. As part of the promotional activities for the 290X, some retailers have already been listing the cards while others have been taking pre-orders, and cards will officially go on sale tomorrow. Note that this is a full reference launch, so everyone will be shipping identical reference cards for the time being. Customized cards, including the inevitable open air cooled ones, will come later.

Fall 2013 GPU Pricing Comparison

| AMD | Price | NVIDIA |
|---|---|---|
|  | $650 | GeForce GTX 780 |
| Radeon R9 290X | $550 |  |
|  | $400 | GeForce GTX 770 |
| Radeon R9 280X | $300 |  |
|  | $250 | GeForce GTX 760 |
| Radeon R9 270X | $200 |  |
|  | $180 | GeForce GTX 660 |
|  | $150 | GeForce GTX 650 Ti Boost |
| Radeon R7 260X | $140 |  |

Comments

  • Antiflash - Thursday, October 24, 2013

    I've usually preferred Nvidia cards, but they had it well deserved when they decided to price GK110 in the stratosphere just "because they can" since they had no competition. That's a poor way to treat your customers and take advantage of fanboys. Full implementations of Tesla and Fermi were always priced around $500. Pricing Kepler GK110 at $650+ was stupid. It's silicon after all; you should get more performance for the same price each year, not more performance at a premium price as Nvidia tried to do this generation. AMD is not doing anything extraordinary here; they are just not following Nvidia's price gouging practices, and $550 puts their flagship GPU at historical market prices. We would not be having this discussion if Nvidia had done the same with GK110.
  • blitzninja - Saturday, October 26, 2013

    OMG, why won't you people get it? The Titan is a COMPUTE-GAMING HYBRID card; it's for professionals who run PRO apps (i.e. the Adobe Media product line, 3D modeling, CAD, etc.) but are also gamers and don't want to have SLI setups for gaming + compute, or can't afford to do so.

    A Quadro card is $2500; this card has 1 less SMX unit and no PRO customer driver support, but is $1000 and does both gaming AND compute. As far as low-level professionals are concerned this thing is the very definition of a steal. Heck, you SLI two of these things and you're still up $500 from a K6000.

    What usually happens is the company they work at will have Quadro workstations and at home the employee has a Titan. Sure it's not as good but it gets the job done until you get back to work.

    Please check your shit. Everyone saying the R9 290X – and yes, I agree that for gaming it's got some real good price/performance – destroys the Titan is ignorant and needs to do some good long research into:
    A. How well the Titan sold
    B. The size of the compute market and MISSING PRICE POINTS in said market.
    C. The amount of people doing compute who are also avid gamers.
  • chimaxi83 - Thursday, October 24, 2013

    Impressive. This card beats Nvidia on EVERY level! Price, performance, features, power... every level. Nvidia paid the price for gouging its customers; they are going to lose a ton of market share. I doubt they have anything to match this for at least a year.
  • Berzerker7 - Thursday, October 24, 2013

    Sounds like a bot. The card is worse than a Titan on every point except high resolution (read: 4K), including power, temperature and noise.
  • testbug00 - Thursday, October 24, 2013

    Er, the Titan beats it on being higher priced, looking nicer, having a better cooler, and using less power.

    Even in 1080p a 290X approximately ties the Titan (slightly ahead, by 4%, according to TechPowerUp).

    Well, it's a $550 card that can tie a $1000 card at a resolution that a card this fast really shouldn't be bought for. (Seriously, if you are playing at 1200p or less there is no reason to buy any GPU over $400 unless you plan to upgrade screens soon.)
  • Sancus - Thursday, October 24, 2013

    The Titan was a $1000 card when it was released... 8 months ago. So for 8 months Nvidia has had the fastest card and been able to sell it at a ridiculous price premium (even at $1000, supply of Titans was quite limited, so it's not like they would have somehow benefited from setting the price lower... in fact the Titan would probably have made more money for Nvidia at an even HIGHER price).

    The fact that ATI is just barely matching Nvidia at regular resolutions and slightly beating them at 4k, 8 months later, is a baseline EXPECTATION. It's hardly an achievement. If they had released anything less than the 290X they would have completely embarrassed themselves.

    And I should point out that they're heavily marketing 4K resolution for this card, and yet frame pacing in Crossfire, even with their 'fixes', is still pretty terrible. And if you are seriously planning to game at 4K you need Crossfire, which has never really been usable.
  • anubis44 - Thursday, October 24, 2013

    The margin of victory for the R9 290X over the Titan at 4K resolutions is not 'slight', it's substantial. HardOCP says it's 10-15% faster on average. That's a $550 card that's 10-15% faster than a $1000 card.

    What was that about AMD being embarrassed?
  • Sancus - Thursday, October 24, 2013

    By the time more than 1% of the people buying this card even have 4K monitors, 20nm cards will have been on sale for months. Not only that, but you would basically go deaf next to a Crossfire 290X setup, which is what you need for 4K. And anyway, the 290X is faster only because it's been monstrously overclocked beyond the ability of its heatsink to cool it properly. The 780/Titan are still far more viable 2/3/4 GPU cards because of their superior noise and power consumption.

    All 780s overclock to considerably faster than this card at ALL resolutions, so the GTX 780 Ti is probably just an OCed 780, and it will outperform the 290X while still being 10dB quieter.
  • DMCalloway - Thursday, October 24, 2013

    You mention monstrously OC'ing the 290X yet have no problem OC'ing the 780 in order to create a 780 Ti. Everyone knows that aftermarket coolers will keep the noise and temps in check when released. Let's deal with the here and now, not speculate on future cards. Face it: AMD at least matches or beats a card costing $100 more, which will cause Nvidia to launch the 780 Ti at less than current 780 prices.
  • Sancus - Thursday, October 24, 2013

    You don't understand how pricing works. AMD is 8 months late to the game. They've released a card that is basically the GTX Titan, except it uses more than 50W more power and has a bargain basement heatsink. That's why it's $100 cheaper: AMD is the one who is far behind, and the only way for them to compete is on price. They demonstrably can't compete purely on performance; if the 290X was WAY better than the GTX Titan, AMD would have priced it higher, because guess what, AMD needs to make a profit too – and they have consistently lost money for years now.

    The company that completely owned the market to the point that they could charge $1000 for a video card is the winner here, not the one that arrived out of breath at the finish line 8 months later.

    I would love for AMD to be competitive *at a competitive time* so that we didn't have to pay $650 for a GTX 780, but the fact of the matter is that they're simply not.
