To say it’s been a busy month for AMD is probably something of an understatement. After hosting a public GPU showcase in Hawaii just under a month ago, the company has already launched the first 5 cards in the Radeon 200 series – the 280X, 270X, 260X, 250, and 240 – and AMD isn’t done yet. Riding a wave of anticipation and saving the best for last, today AMD is finally launching the Big Kahuna: the Radeon R9 290X.

The 290X is not only the fastest card in AMD’s 200 series lineup, but the 290 series also contains the only new GPU in AMD’s latest generation of video cards. Dubbed Hawaii, this GPU is AMD’s bid for a second wind between manufacturing node launches. By taking what they learned from Tahiti and building a refined GPU against a much more mature 28nm process – something that also opens the door to a less conservative design – AMD has been able to build a bigger, better Tahiti that continues down the path laid out by their Graphics Core Next architecture while bringing some new features to the family.

Bigger and better isn’t just a figure of speech, either. The GPU really is bigger, and the performance is unquestionably better. After vying with NVIDIA for the GPU performance crown for the better part of a year, AMD fell out of the running earlier this year with the release of NVIDIA’s GK110-powered GTX Titan, and now AMD wants that crown back.

AMD GPU Specification Comparison

| | AMD Radeon R9 290X | AMD Radeon R9 280X | AMD Radeon HD 7970 | AMD Radeon HD 6970 |
|---|---|---|---|---|
| Stream Processors | 2816 | 2048 | 2048 | 1536 |
| Texture Units | 176 | 128 | 128 | 96 |
| ROPs | 64 | 32 | 32 | 32 |
| Core Clock | 727MHz? | 850MHz | 925MHz | 880MHz |
| Boost Clock | 1000MHz | 1000MHz | N/A | N/A |
| Memory Clock | 5GHz GDDR5 | 6GHz GDDR5 | 5.5GHz GDDR5 | 5.5GHz GDDR5 |
| Memory Bus Width | 512-bit | 384-bit | 384-bit | 256-bit |
| VRAM | 4GB | 3GB | 3GB | 2GB |
| FP64 | 1/8 | 1/4 | 1/4 | 1/4 |
| TrueAudio | Y | N | N | N |
| Transistor Count | 6.2B | 4.31B | 4.31B | 2.64B |
| Typical Board Power | ~300W (Unofficial) | 250W | 250W | 250W |
| Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 28nm | TSMC 40nm |
| Architecture | GCN 1.1 | GCN 1.0 | GCN 1.0 | VLIW4 |
| GPU | Hawaii | Tahiti | Tahiti | Cayman |
| Launch Date | 10/24/13 | 10/11/13 | 12/28/11 | 12/15/10 |
| Launch Price | $549 | $299 | $549 | $369 |

We’ll dive into the full architectural details of Hawaii a bit later, but as usual let’s open up with a quick look at the specs of today’s card. Hawaii is a GCN 1.1 part – the second such part from AMD – and because of that comparisons with older GCN parts are very straightforward. For gaming workloads in particular we’re looking at a GCN GPU with even more functional blocks than Tahiti and even more memory bandwidth to feed it, and 290X performs accordingly.

Compared to Tahiti, AMD has significantly bulked up both the front end and the back end of the GPU, doubling each of them. The front end now contains 4 geometry processor and rasterizer pairs, up from the 2 geometry processor and rasterizer pairs on Tahiti, while on the back end we’re now looking at 64 ROPs versus Tahiti’s 32. Meanwhile in the computational core AMD has gone from 32 CUs to 44, increasing the amount of shading/texturing hardware by 38%.

On the other hand GPU clockspeeds on 290X are being held consistent versus the recently released 280X, with AMD shipping the card with a maximum boost clock of 1GHz (they’re unfortunately still not telling us the base GPU clockspeed), which means any significant performance gains will come from the larger number of functional units. With that in mind we’re looking at a video card that has 200% of 280X’s geometry/ROP performance and 138% of its shader/texturing performance. In the real world performance will trend closer to the increased shader/texturing performance – ROP/geometry bottlenecks don’t easily scale out like shading bottlenecks – so for most scenarios the upper bound for performance increases is that 38%.
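For those who want to check the math, those percentages fall straight out of the unit counts; here is a minimal sketch (our own arithmetic using the spec table’s numbers, not an AMD formula) that reproduces the 138%/200% figures:

```python
# Peak throughput of a fixed-function block scales as units x clockspeed.
# With the 290X and 280X both boosting to 1GHz, the comparison reduces to
# a simple ratio of unit counts.

def throughput_ratio(units_new, units_old, clock_new_mhz=1000, clock_old_mhz=1000):
    """Relative peak throughput, new part vs. old part."""
    return (units_new * clock_new_mhz) / (units_old * clock_old_mhz)

print(f"Shading/texturing: {throughput_ratio(2816, 2048):.0%}")  # 138%
print(f"ROPs:              {throughput_ratio(64, 32):.0%}")      # 200%
print(f"Geometry:          {throughput_ratio(4, 2):.0%}")        # 200%
```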

Meanwhile the job of feeding Hawaii comes down to AMD’s fastest memory bus to date. With the 280X and other Tahiti cards already shipping with a 384-bit memory bus running at 6GHz – and consuming quite a bit of die space to get there – AMD has opted to increase their available memory bandwidth by rebalancing their memory configuration in favor of a wider, lower clocked memory bus. For Hawaii we’re looking at a 512-bit memory bus paired with 5GHz GDDR5, which brings the total memory bandwidth to 320GB/sec. The reduced clockspeed means that AMD’s total memory bandwidth gains aren’t quite as large as the increase in the memory bus width itself, but compared to the 288GB/sec on the 280X this is still an 11% increase in memory bandwidth, and a move very much needed to feed the larger number of ROPs that come with Hawaii. More interesting however is that in spite of the wider memory bus, the total size of AMD’s memory interface has actually gone down compared to Tahiti, and we’ll see why in a bit.
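The bandwidth figures are similarly easy to sanity check; here is a quick sketch (again our own helper, not an AMD formula) using the table’s bus widths and effective data rates:

```python
# Peak GDDR5 bandwidth = (bus width in bits / 8) bytes x effective data rate.
# The "5GHz"/"6GHz" memory clocks are effective (quad-pumped) data rates.

def gddr5_bandwidth(bus_width_bits, data_rate_gtps):
    """Peak memory bandwidth in GB/sec."""
    return bus_width_bits / 8 * data_rate_gtps

hawaii = gddr5_bandwidth(512, 5.0)  # 320.0 GB/sec (290X)
tahiti = gddr5_bandwidth(384, 6.0)  # 288.0 GB/sec (280X)
print(f"290X: {hawaii:.0f}GB/sec vs. 280X: {tahiti:.0f}GB/sec "
      f"(+{hawaii / tahiti - 1:.0%})")  # +11%
```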

At the same time, because AMD’s memory interface is so compact they’ve been able to move to a 512-bit memory bus without requiring too large a GPU. At 438mm2 and composed of 6.2B transistors, Hawaii is now the largest GPU ever produced by AMD – 18mm2 bigger than R600 (HD 2900) – but compared to the 365mm2, 4.31B transistor Tahiti, AMD has been able to pack a larger memory bus and a much larger number of functional units into the GPU for only a 73mm2 (20%) increase in die size. The end result is that AMD is once again able to significantly improve their efficiency on a die size basis while remaining on the same process node. AMD is no stranger to producing these highly optimized second wind designs, having done something similar for the 40nm era with Cayman (HD 6900), and as with Cayman the payoff is the ability to increase performance and efficiency between new manufacturing nodes, something that will become increasingly important for GPU manufacturers as the rate of fab improvements continues to slow.
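To put that die-level efficiency claim in numbers, a quick transistor density calculation from the table’s figures (our own arithmetic, not an AMD-published metric) illustrates the gain:

```python
# Transistor density from the die sizes and transistor counts quoted above.
chips = {"Hawaii": (6.2e9, 438), "Tahiti": (4.31e9, 365)}  # (transistors, mm^2)

density = {name: xtors / mm2 / 1e6 for name, (xtors, mm2) in chips.items()}
for name, mtr_per_mm2 in density.items():
    print(f"{name}: {mtr_per_mm2:.1f} Mtransistors/mm^2")
print(f"Density gain: {density['Hawaii'] / density['Tahiti'] - 1:.0%}")
# Hawaii: ~14.2 vs. Tahiti: ~11.8, roughly 20% more transistors per mm^2
# on the same 28nm node, courtesy of the more mature process.
```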

Moving on, let’s quickly talk about power consumption. With Hawaii AMD has made a number of smaller changes, both to the power consumption of the silicon itself and to how it is defined. On the tech side of matters AMD has been able to reduce transistor leakage compared to Tahiti, directly reducing power consumption of the GPU as a result, and this is being paired with changes to certain aspects of their power management system, implementing advanced power/performance management abilities that vastly improve the granularity of their power states (more on this later).

At the same time, however, how power consumption is defined is getting far murkier: AMD doesn’t list the power consumption of the 290X in any of their documentation or specifications, and after asking them directly we’re only being told that the “average gaming scenario power” is 250W. We’ll dive into this more when we break down the changes to PowerTune on the 290X, but in short AMD is likely underreporting the 290X’s power consumption. Based on our test results we’re seeing the 290X draw more power than any other “250W” card in our collection, and in reality the TDP of the card is almost certainly closer to 300W. There are limits to how long the card can sustain that level of power draw due to cooling requirements, but given sufficient cooling the power limit of the card appears to be around 300W, and for the moment we’re labeling it as such.


Left To Right: 6970, 7970, 290X

Finally, let’s talk about pricing, availability, and product positioning. As AMD already launched the rest of the 200 series two weeks ago, the launch of the 290X primarily fills the opening at the top of AMD’s product lineup that the rest of the 200 series created. The 7000 series is in the middle of its phase-out – and the 7990 can’t be too far behind – so the 290X is quickly going to become AMD’s de facto top tier card.

The price AMD will be charging for this top tier is $549, the same price the 7970 launched at nearly two years ago. This is about $100-$150 more expensive than the outgoing 7970GE and $250 more expensive than the 280X, with the 290X offering an average performance increase over the 280X of 30%. Meanwhile when placed against NVIDIA’s lineup the primary competition for the 290X will be the $650 GeForce GTX 780, a card that the 290X can consistently beat, making AMD the immediate value proposition at the high-end. At the same time however NVIDIA will have their 3-game Holiday GeForce Bundle starting on the 28th, making this an interesting inversion of earlier this year, when it was AMD offering large game bundles to improve the competitive positioning of their products versus NVIDIA’s. As always, the value of bundles is ultimately up to the buyer, especially in this case since we’re looking at a rather significant $100 price gap between the 290X and the GTX 780.

As for availability, unlike the 280X this is going to be a hard launch. As part of their promotional activities for the 290X, some retailers have already been listing the cards while others have been taking pre-orders, and cards will officially go on sale tomorrow. Note that this is a full reference launch, so everyone will be shipping identical reference cards for the time being. Customized cards, including the inevitable open air cooled ones, will come later.

Fall 2013 GPU Pricing Comparison

| AMD | Price | NVIDIA |
|---|---|---|
| | $650 | GeForce GTX 780 |
| Radeon R9 290X | $550 | |
| | $400 | GeForce GTX 770 |
| Radeon R9 280X | $300 | |
| | $250 | GeForce GTX 760 |
| Radeon R9 270X | $200 | |
| | $180 | GeForce GTX 660 |
| | $150 | GeForce GTX 650 Ti Boost |
| Radeon R7 260X | $140 | |


Comments
  • TheJian - Friday, October 25, 2013 - link

    LOL. Tell that to both their bottom lines. I see AMD making nothing while NV profits. People who bought titan got a $2500 Tesla for $1000. You don't buy a titan just to game (pretty dumb if you did) as it's for pro apps too (the compute part of the deal). It's a steal for gamers who make money on the card too. Saving $1500 is a great deal. So since you're hating on NV pricing, how do you feel about the 7990 at $1000. Nice to leave that out of your comment fanboy ;) Will AMD now reap what they sow and have to deal with all the angry people who bought those? ROFL. Is the 1K business model unsustainable for AMD too? Even the 6990 came in at $700 a ways back. Dual or single chip the $1000 price is alive and well from either side for those who want it.

    I'd bet money a titan ultra will be $1000 again shortly if they even bother as it's not a pure gamer card but much more already. If you fire up pro apps with Cuda you'll smoke that 290x daily (which covers just about all pro apps). Let me know when AMD makes money in a quarter that NVDA loses money. Then you can say NV pricing is biting them in the A$$. Until then, your comment is ridiculous. Don't forget even as ryan points out in this article (and he don't love NV...LOL), AMD still has driver problems (and has for ages) but he believes in AMD much like fools do in Obama still...LOL. For me, even as an 5850 owner, they have to PROVE themselves before I ponder another card from them at 20nm. The 290x is hot, noisy and uses far more watts and currently isn't coming with 3 AAA games either. NV isn't shaking in their boots. I'll be shocked if 780TI isn't $600 or above as it should match Titan which 290x doesn't do even with the heat, noise and watts.

    And you're correct no OC room. Nobody has hit above 1125.

    If NV was greedy, wouldn't they be making MORE money than in 2007? They haven't cracked 850mil in 5 years. Meanwhile, AMD's pricing, which you seem to love, has caused their entire business to basically fail (no land, no fabs, gave up cpu race, 8 months to catch up with a hot noisy chip etc). They have lost over $6B in the last 10yrs. AMD has idiots managing their company and they are destroying what used to be a GREAT company with GREAT products. They should have priced this card $100 higher and all the other rebadged cards should be $50 higher. They might make some actual money every quarter then, right? Single digit margins on console chips (probably until 20nm shrink) won't get you rich either. Who made that deal? FIRE THAT GUY. That margin is why NV said it wasn't worth it.
  • chizow - Saturday, October 26, 2013 - link

    AMD's non-profitability goes far beyond their GPU business, it's more due to their CPU business. People who got Titan didn't get a Tesla for $1000, they got a Tesla without ECC. Compute apps without ECC would mean second-guessing every result because you're unsure whether a value was stored/retrieved from memory correctly. Regarding 7990 pricing, you can surely look it up before pulling the fanboy card, just as you can look up my comments on 7970 launch pricing. And yes, AMD will absolutely have to deal with that backlash given that card dropped even more precipitously than Titan, going from $1K to $600 in only 4-5 months.

    I don't think Nvidia will make the same mistake with a Titan Ultra at $1K. I also don't think Nvidia fans who only bought Titan for gaming will fall for the same mistake 2x. If Maxwell comes out and Nvidia holds out on the big ASIC, I doubt anyone disinterested in compute will fall for the same trick if Nvidia launches a Titan 2 at $1K using a compute gimmick to justify the price. They will just point to Titan and say "wait 3 months and they'll release something that's 95% of its performance at 65% of its price". As they say, fool me once, shame on you; fool me twice, shame on me.

    And no, greed and profit don't go hand in hand. In 2007-2008, Nvidia posted record profits and revenue for multiple consecutive quarters, as you stated, on the back of a cheap $230-$270 8800GT. With Titan, they reversed course by setting record margins, but on reduced revenue and profits. They basically covet Intel's huge profit margins, but they clearly lack the revenue to grow their bottom line. Selling $1K GPUs certainly isn't going to get them there any faster.
  • FragKrag - Thursday, October 24, 2013 - link

    great performance, but I'll wait until I see some better thermals/noise from aftermarket coolers :p
  • Shark321 - Thursday, October 24, 2013 - link

    As with the Titan in the beginning, no alternate coolers will be available for the time being (according to computerbase). This means even if the price is great, you will be stuck with a very noisy and hot card. The 780 Ti will outperform the 290X in 3 weeks. It remains to be seen how it will be priced (I guess $599).
  • The Von Matrices - Thursday, October 24, 2013 - link

    This is the next GTX 480 or HD 2900 XT. It provides great performance for the price, that is if you can put up with the heat and noise.
  • KaosFaction - Thursday, October 24, 2013 - link

    Work in Progress!!! Whhhhaaaattttt I want answers now!!
  • masterpine - Thursday, October 24, 2013 - link

    Good to see something from AMD challenging the GK110s. I still find it fairly remarkable that in the fast moving world of GPUs it's taken 12 months for AMD to build something to compete. Hopefully this puts a swift end to the above-$600 prices in the single GPU high end.

    More than a little concerned at the 95C target temp of these things. 80C is toasty enough already for the GTX780, actually had to point a small fan at the DVI cables coming out the back of my 780 SLI surround setup because the heat coming out the back of them was causing dramas. Not sure i could cope with the noise of a 290X either.

    Anyhow, this is great for consumers. Hope to see some aftermarket coolers rein these things in a bit. If the end result is both AMD and Nvidia playing hard-ball at the $500 mark in a few weeks' time, we all win.
  • valkyrie743 - Thursday, October 24, 2013 - link

    HOLY TEMPS BATMAN. It's the new GTX 480 in the temps department.
  • kallogan - Thursday, October 24, 2013 - link

    No overclocking headroom with stock cooler. That's for sure.
  • FuriousPop - Thursday, October 24, 2013 - link

    can we please see 2x in CF mode with eyefinity!? or am i asking for too much?

    also, Nvidia will always be better for those of you in the 30% department of having only max 1080p. for the rest of us in 1440p and 1600p and beyond (eyefinity) then AMD will be as stated by previous comments in this thread "King of the hill"....

    but none the less, some more testing in the CF+3x monitor department would be great to see how far this puppy really goes...

    i mean seriously whats the point of putting a 80 year old man behind the wheel of the worlds fastest car?!? please push the specs on gaming benchmarks pls (eg; higher res)
