Sapphire Radeon R9 290 Tri-X OC Review: Our First Custom Cooled 290
by Ryan Smith on December 24, 2013 3:45 PM EST
Overclocking
Our final evaluation criterion is overclocking. Since the 290 Tri-X OC is based on AMD’s reference board, the card has the same overclocking functionality and capabilities as any reference card, which is to say that we have voltage control and monitoring, but the board itself is not particularly engineered for extreme overclocking. At the same time, the nature of putting together a card like the 290 Tri-X OC means that Sapphire is doing some degree of binning.
Radeon R9 290 Overclocking
| | Sapphire Radeon R9 290 Tri-X OC | Ref. Radeon R9 290 |
|---|---|---|
| Shipping Core Clock | 699MHz | 662MHz |
| Shipping Boost Clock | 1000MHz | 947MHz |
| Shipping Memory Clock | 5.2GHz | 5GHz |
| Shipping Boost Voltage | ~1.18v | ~1.18v |
| Overclock Core Clock | 824MHz | 790MHz |
| Overclock Boost Clock | 1125MHz | 1075MHz |
| Overclock Memory Clock | 6GHz | 5.6GHz |
| Overclock Max Boost Voltage | ~1.23v | ~1.18v |
For overclocking the 290 Tri-X OC, we increased the PowerTune limit by 20% and the voltage by 50mV, to roughly 1.23v. From this we were able to get another 125MHz (13%) out of the GPU and 800MHz (15%) out of the VRAM, now topping out at 1125MHz for the GPU boost clock and 6GHz for the VRAM. These final clockspeeds are better than our previous efforts at overclocking our reference 290 (which took place prior to voltage control), although only moderately so.
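To make the arithmetic explicit, here's a minimal sketch that recomputes those percentages from the clocks in the table above; the helper function is purely illustrative:

```python
# Illustrative only: recompute the overclocking headroom percentages
# from the clocks listed in the table above.

def headroom(stock_mhz: float, oc_mhz: float) -> float:
    """Return the overclock as a percentage gain over the stock clock."""
    return (oc_mhz - stock_mhz) / stock_mhz * 100

# Sapphire 290 Tri-X OC: 1000MHz -> 1125MHz boost, 5.2GHz -> 6GHz memory
print(f"GPU boost: +{headroom(1000, 1125):.1f}%")   # ~12.5%, rounded to 13% in the text
print(f"Memory:    +{headroom(5200, 6000):.1f}%")   # ~15.4%, rounded to 15% in the text
```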
Starting first with gaming performance: as our overclock is more extensive than Sapphire’s factory overclock, it has yielded very solid results, despite the fact that this isn’t explicitly an overclocking board. Between the 13% core overclock and 15% memory overclock, the average performance increase comes in at 12%, varying depending on whether a game is more bandwidth limited, GPU limited, or CPU limited.
At 12% faster, the overclocked 290 Tri-X OC is fast enough to largely close the gap between it and the reference GeForce GTX 780 Ti. Otherwise it consistently outscores the 290X in “uber” mode, in spite of the pared down nature of the Hawaii GPUs used in 290 cards.
The power consumption penalty for overclocking is significant, but outside of the outright power limited FurMark it is not unreasonable. Under Crysis 3 the additional 13% in performance comes at a cost of 30W at the wall, most of which will be from the video card.
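For a rough idea of what a wall measurement implies for the card itself, here's a minimal sketch; the 90% PSU efficiency is an assumed value for illustration, not a measured figure:

```python
# Illustrative only: translate a power delta measured at the wall into an
# approximate delta at the video card, assuming a hypothetical PSU efficiency.

PSU_EFFICIENCY = 0.90  # assumed for illustration; real efficiency varies with load

def card_delta_from_wall(wall_delta_w: float, efficiency: float = PSU_EFFICIENCY) -> float:
    """Approximate the DC-side (card) power increase behind an AC-side (wall) increase."""
    return wall_delta_w * efficiency

# A ~30W increase at the wall works out to roughly 27W at the card under these assumptions
print(f"~{card_delta_from_wall(30):.0f}W at the card")
```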
Along with the increase in power consumption, overclocking also brings the expected increase in operating temperatures. Under Crysis 3 this is an increase of 5C to 75C, while for FurMark (where the increase in power was greater) this is an additional 7C to 81C. FurMark definitely gives the Tri-X cooler a harder time when the video card is overclocked, but as FurMark is a worst case scenario, real world operating temperatures should still remain well under 80C, as evidenced by Crysis 3.
Even with our overclocking applied, the 290 Tri-X OC still holds up rather well when it comes to noise under Crysis 3. At 45.6dB this is a 4.5dB increase in noise, more than enough to be noticed, but notably it’s still quieter than our stock 280X and 7970GE, both of which also use open air coolers. FurMark on the other hand pushes the card much harder since it’s reaching the new 120% PowerTune power limit, leading to a jump in noise of just under 10dB to 52.8dB. Even faced with an overclocked FurMark the Tri-X cooler is still holding up very well, though we’ve clearly reached the point (with regards to thermal loads) where it has lost its near-silence.
119 Comments
ShieTar - Tuesday, December 24, 2013 - link
"Curiously, the [idle] power consumption of the 290 Tri-X OC is notably lower than the reference 290."

Well, it runs about 10°C cooler, and silicon does have a negative temperature coefficient of electrical resistance. That 10°C should lead to a resistance increase of a few %, and thus to a lower current of a few %. Here's a nice article about the same phenomenon observed going from a stock 480 to a Zotac AMP! 480:
http://www.techpowerup.com/reviews/Zotac/GeForce_G...
The author over there was also initially very surprised. Apparently kids these days just don't pay attention in physics class anymore ...
EarthwormJim - Tuesday, December 24, 2013 - link
It's mainly the leakage current which decreases as temperature decreases, which can lead to the reductions in power consumption.

Ryan Smith - Tuesday, December 24, 2013 - link
I had considered leakage, but that doesn't explain such a (relatively) massive difference. Hawaii is not a leaky chip; meanwhile, if we take the difference at the wall to be entirely due to the GPU (after accounting for PSU efficiency), it's hard to buy that 10C of leakage alone is increasing idle power consumption by one-third.

The Von Matrices - Wednesday, December 25, 2013 - link
In your 290 review you said that the release drivers had a power leak. Could this have been fixed and account for the difference?

Samus - Wednesday, December 25, 2013 - link
Quality VRMs and circuitry optimizations will have an impact on power consumption, too. Lots of factors here...

madwolfa - Wednesday, December 25, 2013 - link
This card is based on reference design.

RazberyBandit - Friday, December 27, 2013 - link
And based does not mean an exact copy -- it means similar. Some components (caps, chokes, resistors, etc.) could be upgraded and still fill the bill for the base design. Some components could even be downgraded, yet the card would still fit the definition of "based on AMD reference design."

Khenglish - Wednesday, December 25, 2013 - link
Yes power draw does decrease with temperature, but not because resistance drops. Resistance dropping has zero effect on power draw. Why? Because processors are all about pushing current to charge and discharge wire and gate capacitance. Lower resistance just means that happens faster.

The real reason power draw drops is due to lower leakage. Leakage current is completely unnecessary and is just wasted power.
Also an added tidbit. The reason performance increases while temperature decreases is mainly due to the wire resistance dropping, not an improvement in the transistor itself. Lower temperature decreases the number of carriers in a semiconductor but improves carrier mobility. There is a small net benefit to how much current the transistor can pass due to temperature's effect on silicon, but the main improvement is from the resistance of the copper interconnects dropping as temperature drops.
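For readers following this back-and-forth, here is a minimal sketch of the two power models being argued over, using made-up nominal numbers purely for illustration:

```python
# Illustrative only: the two power components debated in this thread, with
# made-up nominal numbers. Dynamic (switching) power has no resistance term,
# while dissipation in a fixed-voltage resistive path falls as resistance rises.

def dynamic_power(activity: float, capacitance_f: float, voltage_v: float, freq_hz: float) -> float:
    """Classic switching-power model: P = a * C * V^2 * f."""
    return activity * capacitance_f * voltage_v ** 2 * freq_hz

def resistive_power(voltage_v: float, resistance_ohm: float) -> float:
    """Fixed-voltage dissipation: P = V^2 / R, so a higher R means a lower P."""
    return voltage_v ** 2 / resistance_ohm

# A few-percent resistance increase at lower temperature gives a few-percent
# power drop in the fixed-voltage model (the effect described above).
print(resistive_power(1.0, 1.00), resistive_power(1.0, 1.03))

# Example switching power with arbitrary numbers; note it is independent of resistance.
print(dynamic_power(0.2, 1e-9, 1.18, 1.0e9))
```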
Totally - Wednesday, December 25, 2013 - link
Resistance increases with temperature -> Power draw increases: P = (I^2)*R.

ShieTar - Thursday, December 26, 2013 - link
The current isn't stabilized; generally the voltage is: P = U^2/R.

"Because processors are all about pushing current to charge and discharge wire and gate capacitance. Lower resistance just means that happens faster."
Basically correct; nevertheless, capacitor charging happens asymptotically, and any IC optimised for speed will not wait for a "full" charge. The design baseline is probably to get the lowest charging required for operation at the highest qualified temperature. Since decreasing temperature will increase charging speed, as you pointed out, you will get to a higher charging ratio, and thus use more power.
On top of that, the GPU is not exclusively transistors. There is power electronics, there are interconnects, there are caches, and who knows what else (not me). Now when the transistors pull a little more charge due to the higher temperature, and the interconnects which deliver the current have a higher resistance, then you get additional transmission losses. And that's on top of higher leakage rates.
Of course the equation gets even more fun if you start considering the time constants of the interconnects themselves, which have gotten quite relevant since we got to 32nm structures, hence the high-K materials. Though I honestly have no clue how this contribution is linked to temperature.
But hey, here's hoping that Ryan will go and investigate the power drop with his equipment and provide us with a full explanation. As I personally don't own a GPU which gets hot at idle (can't force the fan below 30% by software and won't stop it by hand), I cannot test idle power behavior on my own, but I can and did repeat the FurMark test described in the link above, and I also see a power saving of about 0.5W per °C with my GTX 660. And that's based on internal power monitoring, so the mainboard/PCIe slot and the PSU should add a bit more to that:
https://www.dropbox.com/s/javq0dg75u40357/Screensh...