Power, Temperature, & Noise

As always, last but not least is our look at power, temperature, and noise. Next to price and performance of course, these are some of the most important aspects of a GPU, due in large part to the impact of noise. All things considered, a loud card is undesirable unless there’s a sufficiently good reason – or sufficiently good performance – to ignore the noise.

For the 290X we’re going to see several factors influencing the power, temperature, and noise characteristics of the resulting card. At the lowest level are the items beholden to the laws of physics: mainly, the fact that AMD has increased their die size by 20% while retaining the same manufacturing process, the same basic architecture, and the same boost clockspeeds. As a result there is nowhere for power consumption to go but up, even with leakage having been clamped down on versus 280X/Tahiti. The questions, of course, are by how much, and whether it’s worth the performance increase.

Meanwhile the 290X also introduces the latest iteration of PowerTune, which significantly alters AMD’s power management strategy. Not only does AMD gain the ability to make fine grained clockspeed/voltage steps, thereby improving their efficiency versus Tahiti, but alongside those improvements comes the new PowerTune temperature and fan speed throttling model. As we’ll see, AMD will certainly need it: they have equipped the 290X with a cooling solution almost identical to that of the 7970 despite the fact that TDP has been increased by roughly 50W, putting an even greater workload on the cooler to move all the heat Hawaii can produce.
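To make that model concrete, below is a minimal sketch of a PowerTune-style control loop. The update order and step size are our assumptions for illustration only; the 95C target, the 40%/55% fan caps, and the clockspeeds come from AMD’s specifications and our own readings.

```python
# Minimal sketch of the new PowerTune throttling model as we understand it.
# Control order and STEP_MHZ are our assumptions; the 300W power limit is our
# estimate (see the power testing below), not an AMD-published figure.
TEMP_TARGET_C = 95       # Hawaii's thermal throttle point
CLOCK_MAX_MHZ = 1000     # advertised boost clock
CLOCK_MIN_MHZ = 727      # base clock we observed under FurMark
STEP_MHZ = 13            # hypothetical fine-grained DPM step

def powertune_step(temp_c, fan_pct, clock_mhz, power_w,
                   fan_cap_pct=40, power_limit_w=300):
    """One control interval: spend fan speed (noise) first, then clockspeed."""
    if temp_c >= TEMP_TARGET_C or power_w >= power_limit_w:
        if fan_pct < fan_cap_pct:
            fan_pct += 1                                          # ramp fan to its cap
        else:
            clock_mhz = max(CLOCK_MIN_MHZ, clock_mhz - STEP_MHZ)  # then shed clocks
    elif clock_mhz < CLOCK_MAX_MHZ:
        clock_mhz = min(CLOCK_MAX_MHZ, clock_mhz + STEP_MHZ)      # headroom: boost back
    return fan_pct, clock_mhz

# Quiet mode caps the fan at 40%; uber mode would pass fan_cap_pct=55 instead.
```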

Seeing as how we don’t have accurate voltage/VID readings at this time, we’ll jump right into clockspeeds. As we stated in our high level overview of the new PowerTune and the 290X, the 290X has two modes, quiet and uber. Both operate at the same clockspeeds and under the same power restrictions, but quiet mode utilizes a maximum fan speed of 40% while uber mode goes to 55%. The 15% difference conceals a roughly 1000rpm difference in fan speed, so there are certainly good reasons for AMD to offer both, as uber mode can get very loud, as we’ll see. At the same time, however, while quiet mode keeps noise in check, it comes up short on letting the 290X run at its full potential. In quiet mode throttling is inevitable; there’s simply not enough airflow to allow the 290X to sustain 1000MHz, as our clockspeed table below indicates.

Radeon R9 290X Average Clockspeeds

                  Quiet (Default)   Uber
Boost Clock       1000MHz           1000MHz
Metro: LL         923MHz            1000MHz
CoH2              970MHz            990MHz
Bioshock          985MHz            1000MHz
Battlefield 3     980MHz            1000MHz
Crysis 3          925MHz            1000MHz
Crysis: Warhead   910MHz            1000MHz
TW: Rome 2        907MHz            1000MHz
Hitman            990MHz            1000MHz
GRID 2            930MHz            1000MHz
FurMark           727MHz            870MHz

As we noted in our testing methodology section, these aren’t the lowest clockspeeds we’ve seen in those games, but rather the average clockspeeds we hit in the final loop of our standard looped benchmark procedures. As such, sustained performance can dip even lower, though by how much will of course depend on ambient temperatures and the cooling capabilities of the chassis itself. We believe our looping benchmarks run long enough to generally reach sustained performance numbers, but in all likelihood some of our numbers on the shortest benchmarks will skew high.
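For reference, here is a minimal sketch of how an average like those in the table can be derived from logged clockspeed samples; the function and its inputs are our own illustration, not the interface of any actual monitoring tool.

```python
# Average the per-second clockspeed samples from only the final benchmark loop,
# by which point the card should be at or near its sustained steady state.
def final_loop_average(clock_samples_mhz, final_loop_start, final_loop_end):
    """clock_samples_mhz: clock readings for the whole run, one per second."""
    final_loop = clock_samples_mhz[final_loop_start:final_loop_end]
    return sum(final_loop) / len(final_loop)
```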

Anyhow, as we can see, sustained clockspeeds come in below 1000MHz in everything, even the shortest benchmarks. Out of all of our games Rome 2 fares the worst in this regard, dropping to 907MHz, while other games like Metro, Crysis, and GRID 2 aren’t far behind at 910MHz-930MHz. FurMark goes one step further and drops to 727MHz, which we believe to be the 290X’s unlisted base clockspeed, indicating it has to drop out of its boost state entirely to bring performance and heat in check with quiet mode’s cooling. The 290X simply cannot sustain its peak boost clocks in quiet mode; there’s not enough cooling to handle the estimated 300W of heat the 290X produces at those performance levels.

Which is why AMD has uber mode. In uber mode the fan speeds are high enough (if just barely) to provide the cooling necessary to keep up with the 290X in nearly every gaming workload. Only Company of Heroes 2 fails to sustain 1000MHz, and while AMD’s utilities don’t provide all of the diagnostic data we’d like, we strongly suspect we’re TDP limited in CoH2 for a portion of the benchmark run, which is why we can’t sustain 1000MHz there. In any case, for most workloads uber mode should be enough to sustain the 290X’s best performance, though not without a significant noise cost.

This is why we’re so dissatisfied with how AMD is publishing the specifications for the 290X. The lack of a meaningful TDP specification is bad enough, but given the video card’s out of the box (quiet mode) performance, it’s disingenuous at best for the only published clockspeed number to be the boost clock. The 290X simply cannot sustain 1000MHz in quiet mode under full load.

NVIDIA, when implementing GPU Boost, had the sense to advertise not only the base clockspeed but also an “average” boost clock that, in our experience, underestimates the real clockspeeds their cards sustain. AMD on the other hand is advertising clockspeeds that by default cannot be sustained. Even Intel, by comparison, makes sure to advertise both base and boost GPU clockspeeds in ARK and other specification sources, despite the vast gulf between the two on some SKUs.

Given this, we find AMD’s current labeling practices troubling. Although seasoned buyers are going to turn to reviews like ours, where the value of a card will be clearly spelled out with respect to both performance and price, listing only the boost clock is deceitful at best. AMD needs to list the base clockspeed, and they’d be strongly advised to further list an average clockspeed similar to NVIDIA’s boost clock. Even those numbers won’t be perfect, but they would at least be a reasonable compromise over listing an “up to” number that is, for all intents and purposes, unsustainable.

In any case, let’s finally get to the power, temperature, and noise data.

Idle power is not in AMD’s favor, and next to the Crossfire issues we were seeing in our gaming tests, this appears to be another bug in their drivers. From what we know about GCN and Hawaii, 88W at the wall is too high even after compensating for the additional memory and the larger GPU die. However if we plug in a 7970 on the Cat 13.11 beta v5 drivers and run the same power test, we find that power consumption rises about 6-8W at the wall versus Cat 13.11 beta v1. For reasons we cannot fully determine, the v5 drivers are causing GCN setups to consume additional power at idle. This is not reflected as a workload on either the GPU or the CPU, so it’s not clear where the power leak is occurring (though temperature data points us to the GPU), but somewhere, somehow AMD has started unnecessarily burning power at idle.

We would fully expect that at some point AMD will be able to get this bug fixed, at which point idle power consumption (at the wall) for 290X should be in the low 80s range. But for the moment 88W is an accurate portrayal of 290X’s power consumption, making it several watts worse than GTX 780 at this time.

As a reminder, starting with the 290X we’ve switched from Metro: Last Light to Crysis 3 for our gaming power/temp/noise results, as Metro exhibits poor scaling on multi-GPU setups, leading to GPU utilization dropping well below 100%.

For this review Crysis 3 actually ends up working out very well as a gaming workload, due to the fact that the 290X and the GTX 780 (its closest competitor) achieve virtually identical framerates at around 52fps. As a result the power consumption from the rest of the system should be very similar, and the difference between the two in wall power should be almost entirely due to the video cards (after taking into account the usual 90% efficiency curve).

With that in mind, as we can see there’s no getting around the fact that compared to both 280X and GTX 780, power consumption has gone up. At 375W at the wall the 290X setup draws 48W more than the GTX 780, 29W more than GTX Titan, and even 32W more than the most power demanding Tahiti card, 7970GE. NVIDIA has demonstrated superior power efficiency throughout this generation and 290X, though an improvement itself, won’t be matching NVIDIA on this metric.
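To translate those wall deltas into card-level deltas, here’s a quick back-of-the-envelope conversion using the ~90% PSU efficiency figure mentioned above. This is a rough sketch; real efficiency varies with load.

```python
# Rough conversion from wall power deltas to card power deltas, assuming the
# ~90% PSU efficiency noted above (an approximation, not a measured curve).
PSU_EFFICIENCY = 0.90

def card_delta_watts(wall_delta_watts):
    """Power drawn downstream of the PSU is roughly wall power x efficiency."""
    return wall_delta_watts * PSU_EFFICIENCY

# Deltas from our Crysis 3 measurements:
for rival, delta_w in (("GTX 780", 48), ("GTX Titan", 29), ("7970GE", 32)):
    print(f"290X vs. {rival}: +{delta_w}W at the wall, "
          f"~+{card_delta_watts(delta_w):.0f}W at the cards")
```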

Overall our concern with power on high end cards has more to do with the ramifications of trying to remove/cool that additional heat than with the power consumption itself – though summer does present its own problems – but it’s still clear that AMD’s 9% average performance advantage over the GTX 780 is going to come at the cost of more than a 9% increase in power consumption. And versus the GTX Titan, which the 290X generally ties, the 290X is still drawing more power. The fact that AMD is delivering better performance than the GTX 780 should not be understated, but neither should the fact that they consume more power while doing so.

FurMark, our pathological case, confirms what we were just seeing with Crysis. Here the 290X’s power consumption falls below the GTX 780’s, but only because, as we now know, the 290X has had to significantly downclock itself to get there. The GTX 780 throttles here too for the same reason, but not as much as the 290X does. Consequently this puts the worst case power scenario for the GTX 780 at worse than the quiet mode 290X, but between this and Crysis the data suggests that the 290X is operating far closer to its limit than the GTX 780 (or GTX Titan) is.

Meanwhile we haven’t paid a lot of attention to the uber mode 290X until now, so now is a good time to do so. The 290X in uber mode still has to downclock for power reasons, but unlike the 290X in quiet mode it stays within its boost state. Based on this we believe the 290X in uber mode is drawing near its peak power consumption in both FurMark and Crysis 3, which besides neatly illustrating the real world difference between quiet and uber modes in terms of how much heat they can move, means that we can look at uber mode to get a good idea of what the 290X’s maximum power consumption is. To that end, based on this data we believe the PowerTune/TDP limit for the 290X is 300W, 50W higher than the “average gaming scenario power” AMD quotes. This also represents a reasonable step up from the Tahiti based 7970 and its ilk, which have an official PowerTune limit of 250W.

Ultimately 300W single-GPU cards have been a rarity, and seemingly for good reason. That much heat is not easy to dissipate coming off of a single heat source (GPU), and the only other 300W cards we’ve seen are not cards with impressive acoustics. Given where AMD was with Tahiti we’re in no way surprised that power consumption has gone up with the larger GPU, but there are consequences to be had for having this much power going through a single card. Not the least of which is the fact that AMD’s reference cooler can’t actually move 300W of heat at reasonable noise levels, hence the use of quiet and uber modes.

Given the idle power consumption numbers we saw earlier, it’s not unexpected to see AMD’s idle temperatures run high. 43C isn’t a problem in and of itself, but it does indicate that the idle power leak is coming from the GPU rather than from a CPU load generated by the drivers.

Given what we know about the new PowerTune and AMD’s design goals for 290X, the load temperatures are pretty much a given at this point. In quiet mode the 290X will hit 94C/95C and will eventually throttle under any game. We won’t completely go over the technical rationale for this (if you’ve missed our PowerTune page, please check that out first), but in short the temperatures we’re seeing, though surprising at first, are accounted for in AMD’s design. The Hawaii GPU should meet the necessary longevity targets even at 95C sustained, and static leakage should be low enough that it’s not causing a significant power consumption problem. It’s certainly a different way of thinking, but with a mature 28nm process and the very fast switching of PowerTune it’s also a completely practical design.

It’s still going to take some getting used to, though.

Moving on to our noise testing, since the 290X reference cooler is based on the 7970’s reference cooler there are few surprises here. 41dB is by no means bad, but the 7970 never did particularly well here either, and neither does the 290X. This level of idle noise will not impress anyone concerned about the matter, especially when a pair of GTX 780s in SLI is still quieter by 1.5dB. It’s enough that the 290X will be at least marginally audible at idle.

Having previously seen power consumption and temperatures under gaming, we finally get to what in most cases should be the most important factor: noise. In reusing the 7970’s reference cooler – a design that already proved mediocre as far as noise is concerned – AMD has put themselves into a tough situation with the 290X. At 53.3dB the 290X is running at its 40% default fan speed limit, meaning we’re seeing both the worst case scenario for noise and one that’s going to occur in every game. To that end it’s marginally quieter than the reference 7970 itself, and louder than everything else we’ve tested, including SLI setups.

At this point the 290X is 1.6dB louder than GTX 780 SLI, 3.1dB louder than GTX Titan, and a very significant 5.8dB louder than GTX 780. GTX 780 may border on overbuilt as far as cooling goes, but the payoff is in situations like this where the difference in noise under load is going to be very significant.
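Because the decibel scale is logarithmic, those gaps are larger than the raw numbers suggest. A quick converter using standard acoustics math; the “+10dB is roughly twice as loud” rule is an approximation.

```python
# Convert dB(A) deltas into acoustic power and approximate perceived loudness
# ratios. Standard formulas: +10dB = 10x acoustic power ~ roughly twice as loud.
def sound_power_ratio(delta_db):
    return 10 ** (delta_db / 10)

def perceived_loudness_ratio(delta_db):
    return 2 ** (delta_db / 10)

for delta_db in (1.6, 3.1, 5.8):   # 290X vs. 780 SLI, Titan, and GTX 780
    print(f"+{delta_db}dB -> {sound_power_ratio(delta_db):.1f}x acoustic power, "
          f"~{perceived_loudness_ratio(delta_db):.2f}x perceived loudness")
```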

As an aside, for anyone wondering why the 290X in quiet mode and the 7970 have such similar noise levels under gaming workloads, there’s a good reason for that. Quiet mode’s 40% maximum fan speed was specifically chosen to match the noise characteristics of the original reference 7970, leading to this exact outcome of it being no louder than the 7970. Meanwhile uber mode’s 55% maximum fan speed was chosen to match the noise characteristics of the reference 7970GE, which was never released to the public and was absurdly loud.

Finally with FurMark, having already reached our 40% fan speed limit for the 290X, we’re merely seeing every other card catch up. What little good news there is for the 290X here is that the gap between the GTX 780/Titan and the 290X closes to a hair over 1dB – a nearly insignificant difference – but that doesn’t change the fact that our gaming workload is a better representation of what to expect under typical use, and as a result a better representation of how much noisier the 290X is than the GTX 780 and its ilk.

In the end it’s clear that AMD needed to make tradeoffs to get the 290X out at its performance levels, and to do so at $550. That compromise has come in the 290X’s power consumption, and more directly in the amount of noise the 290X generates. Which is not to say that the power and noise situation fully negates what AMD has brought to the table in terms of price and performance – though it goes without saying we would have liked to see a better cooler – but it does mean buyers will need to weigh those tradeoffs.

For a high end card the power consumption is not particularly concerning right now, but the noise will be a problem for some buyers. Everyone has their own cutoff of course, but in our book 53.3dB is at the upper range of reasonable noise levels, if not right at the edge. The 290X is no worse (and no better) than the 7970 in this regard, which means we’re looking at an acceptable noise level that will work for some buyers and chassis and won’t work for others. For buyers specifically seeking out an ultra-quiet blower there is no alternative to the GTX 780; otherwise, in the face of what the 290X can do, it would be very hard to justify a card $100 more expensive and roughly 10% slower on the basis of these noise results alone. AMD still holds the edge overall, even if it’s not a clean sweep.

Up next, let’s talk about uber mode for a moment. We’ve focused on quiet mode for the bulk of our writeup not only because it’s the default mode, but because it’s the only mode that makes sense. Uber mode makes the 290X’s performance look even better, particularly in our most thermally stressful games, but ultimately the performance difference is never more than 5%, and 5% is simply not worth the additional noise. It’s unfortunate that AMD has to hold back the 290X’s performance like this to keep noise levels reasonable, but we simply can’t justify running the 290X that loud for a bit more performance.

It’s also for that reason that 290X CF is in the tightest spot of them all, as AMD’s suggestion is that 290X CF users run in uber mode. 290X CF’s performance is great, but a pair of cards only compounds the problem. Short of wearing closed headphones, 290X CF in uber mode is just too much. 290X CF in quiet mode should be significantly better, just as the single card configuration is, but that’s something we’ll have to look into at another time, as we didn’t have time to run that set of benchmarks for this article.

With all of the above in mind, it will be interesting to see what AMD’s partners cook up once semi-custom and fully-custom designs hit the market. Open air coolers should handily outperform AMD’s blower as far as noise is concerned – at the usual tradeoff of dumping that 300W of heat into the chassis – but we’d also like to see one of AMD’s partners take a crack at a better blower. We’ve seen what kind of results NVIDIA can pull off with their high end blower; even if AMD won’t make such a high quality cooler their reference design, it would be to AMD’s benefit to have at least one partner offering something that can compete with the GTX 780 on the noise front while retaining the blower design. Whether we’ll see such a card, however, is another matter entirely.
