To say it’s been a busy month for AMD is probably something of an understatement. After hosting a public GPU showcase in Hawaii just under a month ago, the company has already launched the first 5 cards in the Radeon 200 series – the 280X, 270X, 260X, 250, and 240 – and AMD isn’t done yet. Riding a wave of anticipation and saving the best for last, today AMD is finally launching the Big Kahuna: the Radeon R9 290X.

The 290X is not only the fastest card in AMD’s 200 series lineup, but the 290 series also contains the only new GPU in AMD’s latest generation of video cards. With the 290 series, built around a new GPU dubbed Hawaii, AMD is looking to get their second wind between manufacturing node launches. By taking what they learned from Tahiti and building a refined GPU against a much more mature 28nm process – something that also opens the door to a less conservative design – AMD has been able to build a bigger, better Tahiti that continues down the path laid out by their Graphics Core Next architecture while bringing some new features to the family.

Bigger and better isn’t just a figure of speech, either. The GPU really is bigger, and the performance is unquestionably better. After vying with NVIDIA for the GPU performance crown for the better part of a year, AMD fell out of the running for it earlier this year after the release of NVIDIA’s GK110 powered GTX Titan, and now AMD wants that crown back.

AMD GPU Specification Comparison
                        AMD Radeon R9 290X   AMD Radeon R9 280X   AMD Radeon HD 7970   AMD Radeon HD 6970
Stream Processors       2816                 2048                 2048                 1536
Texture Units           176                  128                  128                  96
ROPs                    64                   32                   32                   32
Core Clock              727MHz?              850MHz               925MHz               880MHz
Boost Clock             1000MHz              1000MHz              N/A                  N/A
Memory Clock            5GHz GDDR5           6GHz GDDR5           5.5GHz GDDR5         5.5GHz GDDR5
Memory Bus Width        512-bit              384-bit              384-bit              256-bit
VRAM                    4GB                  3GB                  3GB                  2GB
FP64                    1/8                  1/4                  1/4                  1/4
TrueAudio               Y                    N                    N                    N
Transistor Count        6.2B                 4.31B                4.31B                2.64B
Typical Board Power     ~300W (Unofficial)   250W                 250W                 250W
Manufacturing Process   TSMC 28nm            TSMC 28nm            TSMC 28nm            TSMC 40nm
Architecture            GCN 1.1              GCN 1.0              GCN 1.0              VLIW4
GPU                     Hawaii               Tahiti               Tahiti               Cayman
Launch Date             10/24/13             10/11/13             12/28/11             12/15/10
Launch Price            $549                 $299                 $549                 $369

We’ll dive into the full architectural details of Hawaii a bit later, but as usual let’s open up with a quick look at the specs of today’s card. Hawaii is a GCN 1.1 part – the second such part from AMD – and because of that comparisons with older GCN parts are very straightforward. For gaming workloads in particular we’re looking at a GCN GPU with even more functional blocks than Tahiti and even more memory bandwidth to feed it, and 290X performs accordingly.

Compared to Tahiti, AMD has significantly bulked up both the front end and the back end of the GPU, doubling each of them. The front end now contains 4 geometry processor/rasterizer pairs, up from 2 such pairs on Tahiti, while on the back end we’re now looking at 64 ROPs versus Tahiti’s 32. Meanwhile in the computational core AMD has gone from 32 CUs to 44, increasing the amount of shading/texturing hardware by 38%.

On the other hand GPU clockspeeds on 290X are being held consistent versus the recently released 280X, with AMD shipping the card with a maximum boost clock of 1GHz (they’re unfortunately still not telling us the base GPU clockspeed), which means any significant performance gains will come from the larger number of functional units. With that in mind we’re looking at a video card that has 200% of 280X’s geometry/ROP performance and 138% of its shader/texturing performance. In the real world performance will trend closer to the increased shader/texturing performance – ROP/geometry bottlenecks don’t easily scale out like shading bottlenecks – so for most scenarios the upper bound for performance increases is that 38%.
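
For those who want to check the math, those scaling figures are nothing more than ratios of the unit counts in the table above; a minimal sketch (the variable names are ours, not AMD’s):

```python
# Upper-bound scaling of the 290X over the 280X from unit counts alone.
# Both cards have a 1GHz maximum boost clock, so clockspeed drops out of the comparison.
rops_280x, rops_290x = 32, 64
cus_280x, cus_290x = 32, 44   # 2048 vs. 2816 stream processors, 64 per CU

rop_geometry_scaling = rops_290x / rops_280x     # 2.0   -> "200%"
shader_texture_scaling = cus_290x / cus_280x     # 1.375 -> "138%" (rounded)

print(f"ROP/geometry: {rop_geometry_scaling:.0%}")        # 200%
print(f"Shader/texturing: {shader_texture_scaling:.1%}")  # 137.5%
```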

Meanwhile the job of feeding Hawaii falls to AMD’s highest-bandwidth memory bus to date. With 280X and other Tahiti cards already shipping with a 384-bit memory bus running at 6GHz – and consuming quite a bit of die space to get there – AMD has opted to rebalance their memory configuration in favor of a wider, lower clockspeed bus to increase their available memory bandwidth. For Hawaii we’re looking at a 512-bit memory bus paired with 5GHz GDDR5, which brings the total memory bandwidth to 320GB/sec. The reduced clockspeed means that AMD’s total memory bandwidth gains aren’t quite as large as the increase in bus width itself, but compared to the 288GB/sec on 280X this is still an 11% increase in memory bandwidth, and a move very much needed to feed the larger number of ROPs that come with Hawaii. More interesting, however, is that in spite of the wider memory bus the total size of AMD’s memory interface has actually gone down compared to Tahiti, and we’ll see why in a bit.
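
As a quick sanity check on those figures, peak GDDR5 bandwidth is just the bus width (in bytes) multiplied by the effective data rate; a minimal sketch using the numbers quoted above (the helper function is ours):

```python
def gddr5_bandwidth_gb_s(bus_width_bits: int, data_rate_ghz: float) -> float:
    """Peak GDDR5 bandwidth: bus width in bytes times effective data rate."""
    return (bus_width_bits / 8) * data_rate_ghz

bw_290x = gddr5_bandwidth_gb_s(512, 5.0)   # 320.0 GB/sec
bw_280x = gddr5_bandwidth_gb_s(384, 6.0)   # 288.0 GB/sec
print(f"{bw_290x:.0f}GB/sec vs. {bw_280x:.0f}GB/sec: +{bw_290x / bw_280x - 1:.0%}")  # +11%
```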

At the same time, because AMD’s memory interface is so compact they’ve been able to move to a 512-bit memory bus without requiring too large a GPU. At 438mm2 and composed of 6.2B transistors, Hawaii is the largest GPU ever produced by AMD – 18mm2 bigger than R600 (HD 2900) – but compared to the 365mm2, 4.31B transistor Tahiti, AMD has been able to pack a larger memory bus and a much larger number of functional units into the GPU for only a 73mm2 (20%) increase in die size. The end result is that AMD is able to once again significantly improve their efficiency on a die size basis while remaining on the same process node. AMD is no stranger to producing these highly optimized second wind designs, having done something similar for the 40nm era with Cayman (HD 6900), and as with Cayman the payoff is the ability to increase performance and efficiency between new manufacturing nodes, something that will become increasingly important for GPU manufacturers as the rate of fab improvements continues to slow.
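
To put that trade-off in numbers, here’s a quick back-of-the-envelope calculation from the die sizes and transistor counts quoted above (the transistor density framing is our own, not an AMD-supplied metric):

```python
# Hawaii vs. Tahiti: die area growth and the resulting transistor density.
tahiti_mm2, hawaii_mm2 = 365, 438
tahiti_xtors_b, hawaii_xtors_b = 4.31, 6.2      # billions of transistors

growth_mm2 = hawaii_mm2 - tahiti_mm2            # 73 mm^2
growth_pct = growth_mm2 / tahiti_mm2            # ~20%
tahiti_density = tahiti_xtors_b * 1000 / tahiti_mm2   # ~11.8 MTransistor/mm^2
hawaii_density = hawaii_xtors_b * 1000 / hawaii_mm2   # ~14.2 MTransistor/mm^2

print(f"+{growth_mm2}mm^2 ({growth_pct:.0%}), density {tahiti_density:.1f} -> {hawaii_density:.1f} MT/mm^2")
```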

Moving on, let’s quickly talk about power consumption. With Hawaii AMD has made a number of smaller changes, both to the power consumption of the silicon itself and to how it is defined. On the technical side of matters AMD has been able to reduce transistor leakage compared to Tahiti, directly reducing power consumption of the GPU as a result, and this is being paired with changes to certain aspects of their power management system, implementing advanced power/performance management abilities that vastly improve the granularity of their power states (more on this later).

At the same time, however, how power consumption is defined is getting far murkier: AMD doesn’t list the power consumption of the 290X in any of their documentation or specifications, and after asking them directly we’re only being told that the “average gaming scenario power” is 250W. We’ll dive into this more when we break down the changes to PowerTune on 290X, but in short AMD is likely underreporting the 290X’s power consumption. Based on our test results we’re seeing the 290X draw more power than any other “250W” card in our collection, and in reality the TDP of the card is almost certainly closer to 300W. There are limits to how long the card can sustain that level of power draw due to cooling requirements, but given sufficient cooling the power limit of the card appears to be around 300W, and for the moment we’re labeling it as such.


Left To Right: 6970, 7970, 290X

Finally, let’s talk about pricing, availability, and product positioning. As AMD already launched the rest of the 200 series 2 weeks ago, the launch of the 290X is primarily about filling the opening at the top of AMD’s product lineup that the rest of the 200 series created. The 7000 series is in the middle of its phase-out – and the 7990 can’t be too far behind – so the 290X is quickly going to become AMD’s de facto top tier card.

The price AMD will be charging for this top tier card is $549, which happens to be the same price the 7970 launched at. This is about $100-$150 more expensive than the outgoing 7970GE and $250 more expensive than the 280X, with the 290X offering an average performance increase over the 280X of 30%. Meanwhile, when placed against NVIDIA’s lineup the primary competition for the 290X will be the $650 GeForce GTX 780, a card that the 290X can consistently beat, making AMD the immediate value proposition at the high end. At the same time, however, NVIDIA will have their 3-game Holiday GeForce Bundle starting on the 28th, making this an interesting inversion of earlier this year, when it was AMD offering large game bundles to improve the competitive positioning of their products versus NVIDIA’s. As always, the value of bundles is ultimately up to the buyer, especially in this case since we’re looking at a rather significant $100 price gap between the 290X and the GTX 780.

Finally, unlike the 280X, this is going to be a very hard launch. As part of their promotional activities for the 290X, some retailers have already been listing the cards while others have been taking pre-orders, and cards will officially go on sale tomorrow. Note that this is a full reference launch, so everyone will be shipping identical reference cards for the time being. Customized cards, including the inevitable open air cooled ones, will come later.

Fall 2013 GPU Pricing Comparison
AMD               Price    NVIDIA
                  $650     GeForce GTX 780
Radeon R9 290X    $550
                  $400     GeForce GTX 770
Radeon R9 280X    $300
                  $250     GeForce GTX 760
Radeon R9 270X    $200
                  $180     GeForce GTX 660
                  $150     GeForce GTX 650 Ti Boost
Radeon R7 260X    $140

 

Comments

  • SolMiester - Monday, October 28, 2013

    So you can OC a 780 on stock, but the 290X can't sustain an OC, which means the 780 wins, especially after the price drop to $500! Oh dear, the AMD 290X just went from hero to zero...
  • TheJian - Friday, October 25, 2013

    I gave links and named the games previously...see my post. At 1080p the 780 trades blows with it depending on the game. Considering 98.75% of us are at 1920x1200 or less, that is important, and you get 3 AAA games with the 780, on top of the fact that it's using far fewer watts, with less noise and less heat. A simple price drop of $50-100 and the 780 seems like a no-brainer to me (disregarding the 780 Ti, which I'd guess will keep the same price as now). Granted, Titan needs a price drop now too, which I'm sure will come, or they'll just replace it with a full-SMX, up-clocked Titan to keep that price. I'm guessing the old Titan just died, as the 780 Ti will likely beat it in nearly everything if the rumored clock speeds and extra SMX are true. They will have to release a new Titan ULTRA or something with another SMX, or up the clocks to 1GHz or something. Or hopefully BOTH.

    I'm guessing it's easier to just add 100MHz or push it to 1GHz, as surely manufacturing has matured to the point where all chips will do this now, more so than having all SMXs defect free. Then again, if you have a bad SMX you just turn a few more off and it's a 780 Ti anyway. They've had 8 months to either pile up cherry-picked chips or just improve overall, so more can do this easily. Clearly the 780 Ti was just waiting in the wings already. They were just waiting to see 290X performance and estimates.
  • eddieveenstra - Sunday, October 27, 2013

    Titan died when the GTX 780 entered the room at 600 euros. I'm betting Nvidia only brings out a GTX 780 Ti and that's it. Titan goes EOL.
  • anubis44 - Thursday, October 24, 2013

    This is the reference card. It's not loud unless you set it to 'Uber' mode, and even then, HardOCP thought the max fan speed should be set to 100% rather than 55%. Imagine how quiet an Asus DirectCU II or Gigabyte Windforce or Sapphire Toxic custom-cooled R9 290X will be.

    Crossfire and frame pacing are all working, and the R9 290X crushes Titan in 4K gaming (read the 4K section of HardOCP's review), all while costing $100 less than the GTX 780. The R9 280X (7970) is priced at $299, the R9 270X (7870) is now going for $180, and now the Mantle API could be the next 3dfx Glide and boost all 7000-series cards and higher dramatically for free...

    It's like AMD just pulled out a light sabre and cut nVidia right in half while Jsen Hsun just stares dumbly at them in disbelief. He should have merged nVidia with AMD when he had the chance. Could be too late now.
  • Shark321 - Thursday, October 24, 2013

    There will be no custom cooling solutions for the time being. It's the loudest card ever released, twice as loud as the 780/Titan in BF3 after 10 minutes of playing. Also, Nvidia will bring out the 780 Ti in 3 weeks, a faster card at a comparable price, but quiet. AMD releases the 290X one year after Nvidia, 2 years after Nvidia's tapeout. Nvidia will be able to counter this with a wink.
  • just4U - Thursday, October 24, 2013

    Shark writes: "It's the loudest card ever released."

    Guess you weren't around for the GeForce 5...
  • HisDivineOrder - Thursday, October 24, 2013

    The FX5800 is not ever dead. Not if we remember the shrill sound of its fans...

    ...or if the sound burned itself into our brains for all time.
  • Samus - Friday, October 25, 2013

    I think the 65nm GeForce GTX 280 takes the cake for loudest card ever made. It was the first card with a blower.
  • ninjaquick - Thursday, October 24, 2013

    lol, the Ti can only do so much; there is no smaller node for either company to jump to, and there won't be enough shipments to stock cards on it until March at the earliest. The 290X just proves AMD's GCN design is a keeper. It is getting massively throttled by heat and still manages to pull a slight lead over the Titan, at sometimes 15% lower clocks than reference. AMD needed a halo card for this release season, and they have it.

    Both Nvidia and AMD are jumping to the next node in 2014. Nvidia will not release Maxwell on the current node. And there is no other node they would invest in going to.
  • HisDivineOrder - Thursday, October 24, 2013

    The Ti could theoretically open up all the disabled parts of the current GK110 part. Doing that, who knows what might happen? We've yet to see a fully enabled GK110. I suspect that might eat away some of the Titan's efficiency advantage, though.
