Overclocking

Finally, let’s spend a bit of time looking at the overclocking prospects for the 290. Without any voltage adjustment capabilities, and with AMD binning chips for clockspeeds and power consumption, we’re not necessarily expecting a lot of headroom here. Nonetheless it’s worth checking out to see how much more we can squeeze out of the card.

Even though we’re officially limited to AMD’s Overdrive utility for overclocking at the moment, Overdrive offers a wide enough range of values that we shouldn’t have any problem maxing out the card; in practice we’ll be limited by the card itself first.

Radeon R9 290 Overclocking
                                 Reference Radeon R9 290
  Shipping Core Clock            662MHz
  Shipping Boost Clock           947MHz
  Shipping Memory Clock          5GHz
  Shipping Boost Voltage         ~1.18v

  Overclock Core Clock           790MHz
  Overclock Boost Clock          1075MHz
  Overclock Memory Clock         5.6GHz
  Overclock Max Boost Voltage    ~1.18v

Despite the lack of voltage control, we were able to achieve solid overclocks on both the GPU and the memory. On a boost clock basis we were able to push the 290 from 947MHz to 1075MHz, an increase of 128MHz (14%). Meanwhile we were able to push the memory from 5GHz to 5.6GHz before artifacting set in, a 600MHz (12%) memory overclock. Being able to increase both clockspeeds by such a similar degree means that whether a game is GPU limited or memory bandwidth limited, we should see some kind of performance increase out of overclocking.
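For those following along, the 14% and 12% figures fall straight out of the clocks in the table above. A minimal Python sketch as a cross-check (purely illustrative; the helper function is ours, not part of Overdrive or any AMD tool):

```python
# Hypothetical helper: percentage gain of an overclock over stock.
def oc_gain(stock_mhz: float, oc_mhz: float) -> float:
    return (oc_mhz - stock_mhz) / stock_mhz * 100.0

print(f"Boost clock:  +{oc_gain(947, 1075):.1f}%")   # ~13.5%, rounded to 14% in the text
print(f"Memory clock: +{oc_gain(5000, 5600):.1f}%")  # 12.0% (5GHz -> 5.6GHz effective)
```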

On a side note, for overclocking the 290 we stuck with moderate increases to both the maximum fan speed and the PowerTune limit. For the former we used a 65% maximum fan speed (which actually proved to be more than necessary), while for the latter we went with a 20% increase in the PowerTune limit, as at this point we don’t have a good idea of what the safe power limits are for the reference 290/290X board. In either case only FurMark could push the overclocked card to its power limit, and nothing could push the card to its fan speed limit. Similarly, we didn’t encounter any throttling issues with our overclocked settings, with every game (including CoH2) running at a sustained 1075MHz.
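To be clear about what “no throttling” means here: the core clock held its 1075MHz target for the duration of every game run. A hypothetical sketch of how one could verify this from a clock log exported by a monitoring tool (the CSV column names and file name are assumptions, not any particular utility’s format):

```python
import csv

# Hypothetical: flag samples in a "time_s,core_mhz" CSV log where the
# core clock dipped below the overclocked target, i.e. throttling.
def find_throttle_events(log_path, target_mhz=1075, tolerance_mhz=10):
    events = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            t, clk = float(row["time_s"]), int(row["core_mhz"])
            if clk < target_mhz - tolerance_mhz:
                events.append((t, clk))
    return events

# An empty list means the card sustained 1075MHz for the whole run, e.g.:
# find_throttle_events("coh2_overclocked_run.csv")  # -> []
```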

Taking a brief look at power, temperature, and noise before jumping into our gaming performance results, we can see that overclocking the card has a measurable impact on power consumption under both Crysis 3 and FurMark. With Crysis 3 we’re clockspeed limited before we’re power limited, leading to an increase in power consumption of 27W, while under FurMark, where we were power limited, it’s a much more academic increase of 87W.

Since the 290 already ships with its temperature limit set to the highest value it allows, 95C, our sustained temperatures are unchanged even after overclocking.

The 290 is already an unreasonably loud card at stock, and unfortunately the fan speed increases needed to handle the greater heat load from overclocking only make this worse. Under Crysis 3 we peaked at 59.7dB at 49% fan speed, while under FurMark we peaked at 65.3dB at 59% fan speed. For these noise levels to be bearable the 290 really needs to be fully isolated (e.g. in another room) or put under water, as 59.7dB sustained is immensely loud for a video card.

Finally getting to the matter of game performance, we’re seeing consistently strong scaling across every game in our collection. The specific performance increase depends on the game as always, but a 14% core overclock and a 12% memory overclock have netted us anywhere from 9% in Metro to the full 14% in Total War: Rome II. At this performance level the 290 OC exceeds the performance of any other single-GPU card at stock, and comes very close to delivering 60fps in every action game in our benchmark suite.
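Put another way, the games are realizing most or all of the core overclock. A rough back-of-the-envelope calculation (the per-game percentages are the figures quoted above; the script itself is just illustrative):

```python
core_oc_pct = 13.5  # 947MHz -> 1075MHz boost clock

# Observed performance gains quoted above (low and high end of the range).
observed_gain_pct = {
    "Metro": 9.0,
    "Total War: Rome II": 14.0,
}

for game, gain in observed_gain_pct.items():
    # Fraction of the core overclock that shows up as real performance.
    print(f"{game}: +{gain:.0f}% ({gain / core_oc_pct:.0%} of the core OC)")
```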

295 Comments

  • TempAccount007 - Saturday, November 9, 2013 - link

    What part of REFERENCE COOLER do you not understand?
  • johnny_boy - Wednesday, November 13, 2013 - link

    The IF isn't so big, I think. A lot of gamers already have blocks for their graphics cards, or don't care much about the additional noise, or want a block anyway at some point and the 290 presents an opportunity to get one now (and then cooling is quieter/better than the competing nVidia cards for the same price when figuring in the watercooling costs for the AMD card). I'd rather get the 290 (over the 780) and use my current watercooling solution. If I didn't have watercooling then I'd still rather buy the 290 and upgrade to watercooling.
  • tgirgis - Thursday, February 20, 2014 - link

    That's really extremely one-sided. First of all, AMD already has a response to G-Sync (their version for now has been dubbed "Free-Sync", but no idea if that nomenclature is final), they have TressFX (which, at the moment, does look better than Nvidia's "Hairworks", but Nvidia will probably soon catch up), and they've got Mantle, which is definitely a massive advantage.

    Not to mention the R9 290 comes with 4GB of VRAM, as opposed to the GTX 780's 3GB, though it's really not a huge issue except in 4K gaming. Finally, Shield compatibility isn't really a benefit; it's a $250 handheld game system, so it's only beneficial if you're interested in purchasing one of those, as opposed to being an included feature.

    Nvidia is not without its advantages, however; they still have lower power consumption and thermals, which is great for mini-ITX systems (although manufacturers' custom-cooled cards can help bridge the gap for thermals), and they do still have PhysX.

    If Mantle keeps going the way it is now, Nvidia might be forced to pay royalties to AMD, similar to how they did with Intel a few years back. If anything, AMD should throw "allow us to use PhysX" into the negotiations :)
  • slickr - Tuesday, November 5, 2013 - link

    Oh yeah, Nvidia at this point has no choice but to lower its prices again. I mean, for $400 this card is amazing. It performs on the same level as the $1000 Titan and the $550 290X, so it's giant performance at a very cheap price.

    Even with the high noise (just wait 2 weeks for custom coolers), this card blows the GTX 780 out of the water; the performance is so much better.

    I think if Nvidia wants to stay in the competition they would need to cut the GTX 780 price to at least $400 as well and try to get sales on the strength of better acoustics and lower power consumption, but if it were just performance in question they would need to lower the price of the 780 to $350 or 300 euros.

    Of course that would mean that the 770 should get a price reduction as well and be around $270.
  • holdingitdown - Tuesday, November 5, 2013 - link

    Yes, this card is incredibly disruptive. The performance makes the 780 look like a mess. Expect to see at least another $100 slashed off the 780, and the 770 needs a little more taken off as well.

    The R9 290 is a monster!
  • crispyitchy - Tuesday, November 5, 2013 - link

    Best card to release yet as far as I am concerned.

    The noise profile is not perfect, but every card is noisy to one degree or another once you're gaming.

    What is perfect is the giant performance for the price.

    Newegg here I COME
  • Wreckage - Tuesday, November 5, 2013 - link

    I doubt NVIDIA will cut their price. This card is so loud that most people will stay away and get a 780 or 770. AMD is so desperate to increase performance that they sacrifice everything else. It's like the last sad days of 3DFX.
  • Da W - Tuesday, November 5, 2013 - link

    Remember what happened after 3Dfx died? Higher price and mediocre performance.
    I'd buy AMD if only to keep them alive and force Nvidia to drop their prices.
  • HisDivineOrder - Tuesday, November 5, 2013 - link

    Actually, traditionally, 3dfx was overpriced until the very end. ATI was always there competing with nVidia and 3dfx, anyway.

    So competition has existed for as long as we've had discrete GPUs in any meaningful way. It's AMD that wants to end competition by standardizing high-performance PC gaming around a GCN-based API only they can use meaningfully.
