PowerTune: Improved Flexibility & Fan Speed Throttling

The final new technology being introduced with Hawaii and 290X is the latest iteration of AMD’s PowerTune technology. Although AMD has not given it a formal name to differentiate it from previous incarnations of PowerTune, the latest iteration brings with it a number of important changes that will significantly alter how the 290X and future cards behave, and how those behaviors can be adjusted.

In a nutshell, with the latest iteration of PowerTune AMD is gaining the necessary hardware monitoring and adjustment abilities to modernize PowerTune, bringing it functionally up to par with NVIDIA’s GPU Boost 2.0, which itself was introduced earlier this year. This includes not only the ability to do fine grained clockspeed/voltage stepping, that alone being a major improvement over what Tahiti could do, but also far more flexible control over the video card, allowing it to be governed by power consumption, temperature, or even fan speed/noise.

Diving right into matters, to once again use Tahiti as a baseline for comparison, PowerTune as implemented on pre-GCN 1.1 cards like Tahiti has 3 (non-boost) or 4 (boost) power management clockspeed/voltage states. These are idle, intermediate (low-3D), high (full-3D), and for the cards that use it, boost. When for whatever reason PowerTune needed to clamp down on power usage to stay within the card’s designated limits, it could either jump states or merely turn down the clockspeed within a state, depending on how far over the throttle point the card was operating. In practice state jumps were rare – it’s a big gap between high and intermediate – so for non-boost cards it would merely turn down the GPU clockspeed within the high state until power consumption was where it needed to be, while boost cards would either do the same within the boost state, or less frequently drop to the high state and then modulate.
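To make the coarseness concrete, here is a minimal Python sketch of that scheme; the state names follow the description above, while the clockspeeds, voltages, and throttle thresholds are purely hypothetical placeholders, not actual Tahiti values.

```python
# Hypothetical sketch of GCN 1.0-style coarse power management.
# State clockspeeds/voltages are illustrative, not real Tahiti numbers.

STATES = {
    "idle":         {"mhz": 300,  "volts": 0.85},
    "intermediate": {"mhz": 501,  "volts": 0.95},
    "high":         {"mhz": 925,  "volts": 1.17},
    "boost":        {"mhz": 1050, "volts": 1.20},
}

def gcn10_throttle(state: str, overage_watts: float) -> tuple[str, float]:
    """Return (state, clockspeed) after throttling.

    Small overages are handled by turning down the clockspeed within
    the current state (voltage stays fixed); only a large overage
    forces a rare jump down to the next state.
    """
    if overage_watts > 50:                  # rare: big gap between states
        state = "high" if state == "boost" else "intermediate"
        return state, STATES[state]["mhz"]
    # Common case: modulate clockspeed within the state, ~2% per watt over.
    mhz = STATES[state]["mhz"] * max(0.7, 1.0 - 0.02 * overage_watts)
    return state, mhz

print(gcn10_throttle("high", overage_watts=10))  # clock-only modulation
```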


Power States Available In Tahiti & Other GCN 1.0 GPUs

Modulating clockspeeds in such a manner is a relatively easy thing to implement, but it’s not without its drawbacks, the big one being that semiconductor power consumption scales at a far greater rate with voltage than it does with clockspeed. So although turning down clockspeeds does reduce power consumption, it doesn’t do so by a large degree. If you want big power savings, you need to turn down the voltage too, and to do so in a fine grained manner.
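A quick worked example makes the difference clear. Dynamic switching power is commonly modeled as P = C·V²·f, so voltage counts quadratically while clockspeed counts only linearly; the sketch below uses made-up numbers to show why clock-only modulation saves so little.

```python
# Dynamic power is commonly modeled as P = C * V^2 * f, so voltage
# counts quadratically while clockspeed counts only linearly.

def dynamic_power(c: float, volts: float, mhz: float) -> float:
    return c * volts**2 * mhz

base = dynamic_power(1.0, 1.2, 1000)

# Clock-only throttle (what GCN 1.0 effectively did): -10% clocks.
clock_only = dynamic_power(1.0, 1.2, 900)

# Clock + voltage throttle (fine grained states): -10% on both.
clock_and_volts = dynamic_power(1.0, 1.08, 900)

print(f"clock only:    {clock_only / base:.0%} of baseline power")   # ~90%
print(f"clock+voltage: {clock_and_volts / base:.0%} of baseline")    # ~73%
```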

Now given the limitations of Tahiti and other pre-GCN 1.1 cards, in order to implement fine grained power states significant changes needed to be made to both the GPU and the card, which is why AMD has not been able to bring this about until Hawaii and Bonaire. As power management is primarily handled by an external controller, the GPU needs to have a telemetry interface to provide the necessary data to the external controller and the ability/programming to quickly jump between states. Meanwhile the external controller needs to be capable enough to handle the telemetry data (it’s a lot of data) and able to quickly switch between states (the faster the better).

With that in mind, AMD set out to solve those problems by giving GCN 1.1 parts the necessary telemetry interface to be paired with equally capable 3rd party voltage controllers. Dubbed the Serial VID Interface (SVI2), the interface is the lynchpin of AMD’s latest iteration of PowerTune. In short, by adding this interface and thereby providing the necessary data to the external controller, AMD finally has the ability to support a large number of states and to rapidly switch between them.

For the 290X and 260X, when combined with the IR 3567B controller AMD is currently using, this translates into the ability to switch voltages as frequently as every 10 microseconds, and to do so by switching between upwards of 255 voltage steps. This massive increase in flexibility in turn allows AMD to control for power consumption, temperature, and even noise in ways that weren’t practical with the coarse grained power management features of GCN 1.0 cards.
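As a rough illustration of what that granularity buys, consider the following toy model. The 255 steps and 10 microsecond interval come from AMD’s figures; the voltage range, step size, and one-step-per-update pacing are our own assumptions for the example.

```python
# Toy model of fine grained voltage stepping in the style of SVI2.
# 255 steps and a 10 microsecond update interval come from the article;
# the voltage range and step size below are illustrative assumptions.

V_MIN, V_MAX, STEPS = 0.80, 1.25, 255
STEP_SIZE = (V_MAX - V_MIN) / (STEPS - 1)   # ~1.8 mV per step here
UPDATE_INTERVAL_US = 10

def vid_for(volts: float) -> int:
    """Map a requested voltage onto the nearest of the 255 VID steps."""
    volts = min(max(volts, V_MIN), V_MAX)
    return round((volts - V_MIN) / STEP_SIZE)

# A 50 mV reduction lands ~28 steps away and, at one step per update,
# completes in well under a millisecond.
delta_steps = abs(vid_for(1.20) - vid_for(1.15))
print(delta_steps, "steps,", delta_steps * UPDATE_INTERVAL_US, "us")
```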

With this level of flexibility in hand, AMD has significantly overhauled PowerTune, both with respect to how PowerTune operates and how the user can manipulate it. Starting under the hood, the inferred states used by Tahiti and other GCN 1.0 GPUs are gone, replaced with a large number of real power states, thereby giving AMD the ability to reduce power consumption in a fine grained manner with real voltage changes as opposed to resorting to ineffective clockspeed modulation. Coupled with that is a new, relaxed (“fuzzy”) fan control scheme, which is based around the concept of slowing down the fan speed response time in order to avoid rapid changes in noise and pitch, and thereby avoiding drawing attention to the card (this being very similar to NVIDIA’s adaptive fan controller).
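AMD hasn’t documented the exact filter behind the relaxed fan control, but conceptually it behaves like a low-pass filter on the requested fan speed, something along the lines of this sketch (the smoothing constant and starting speed are invented):

```python
# Minimal sketch of a "relaxed" fan response, assuming a simple
# exponential moving average; AMD's actual filter is not documented.

def smoothed_fan(target_pcts, alpha=0.05, start=20.0):
    """Trail the target fan speed slowly instead of jumping to it,
    avoiding the rapid pitch changes that draw attention to the card."""
    speed = start
    for target in target_pcts:
        speed += alpha * (target - speed)   # small step toward the target
        yield round(speed, 1)

# A sudden load spike: the requested speed jumps 20% -> 40%, but the
# delivered speed ramps gradually over many control ticks.
print(list(smoothed_fan([40.0] * 10)))
```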

Equally significant however are the changes to the actual system management algorithms used by PowerTune. Taking a page from GPU Boost 2.0, now that AMD can properly step through a large number of voltage states, they’re also giving 290X cards the ability to throttle based on a larger number of conditions. On top of traditional power limit throttling, the 290X in particular gains the ability to throttle based on explicit temperature limits, and even explicit fan speed limits.

Bringing this all together, for the first card to feature the full suite of these new capabilities AMD has set some very interesting throttle points that are unlike anything they or NVIDIA have ever quite done before. Out of the box, in the card’s default “quiet” mode (more on modes later), the 290X has a 95C temperature throttle, a 40% fan speed throttle, and an unofficially estimated 300W power throttle. Meanwhile in the card’s alternative “uber” mode, those throttle points are 95C, 55% fan speed, and 300W respectively.

AMD Radeon R9 290X Throttle Points

                      Quiet Mode (Default)    Uber Mode
Temperature           95C                     95C
Fan Speed             40% (~2100 RPM)         55% (~3050 RPM)
Power (Estimated)     300W                    300W

The addition of the fan speed throttle is very much an X factor that changes how the entire system operates. Whereas previous AMD cards are primarily throttled by power and implicit temperature limits, and more recent NVIDIA cards are throttled by power and explicit temperature limits (with temperature serving as a proxy for fan speed and noise), AMD takes this one step further by making the fan speed its own throttle, creating a new relationship between temperature and fan speeds that doesn’t exist in the old power management paradigms.

The end result of having the fan speed throttle is that for the 290X (and presumably future cards) the temperature throttle becomes part of a joint clause, where both conditions have to be met to trigger throttling. So long as power limits are being met (you can never violate the power limit), a 290X will not throttle unless both the fan speed throttle point and the temperature throttle point are reached. And even then, the temperature throttle point has a direct impact on the behavior of the fan, with the GPU temperature (relative to the throttle point) being used as one of the principal inputs on fan speed. In that sense the temperature throttle point becomes a simple abstraction for the underlying fan curve itself.

Boost Throttle Priority: Power = Fan Speed + Temperature

Now there is one exception to this that’s worth pointing out. The above is applicable to the 290X’s boost states, which are where it should be spending all of its time under load. However, if for whatever reason the card has to drop out of the boost states and revert to the base clockspeed state of 727MHz, then the relationship between fan speed and temperature becomes reversed, and the card will outright violate fan speed throttles in order to maintain the target temperature while also staying at the base clockspeed.

Base Throttle Priority: Power = Temperature > Fan Speed
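Putting the two priority schemes together, the decision logic as described can be sketched roughly as follows; the quiet mode limits come from the table above, while the function names and numeric behavior are illustrative rather than AMD’s actual firmware logic.

```python
# Hedged sketch of the throttle relationships described above; the
# inputs are hypothetical and the real firmware is certainly more involved.

POWER_LIMIT_W, TEMP_LIMIT_C, FAN_LIMIT_PCT = 300, 95, 40   # quiet mode

def boost_should_throttle(watts, temp_c, fan_pct):
    """Boost states: Power = Fan Speed + Temperature.
    The power limit can never be violated; otherwise throttling only
    kicks in once BOTH the fan and temperature limits are reached."""
    if watts > POWER_LIMIT_W:
        return True
    return fan_pct >= FAN_LIMIT_PCT and temp_c >= TEMP_LIMIT_C

def base_fan_speed(temp_c, fan_pct):
    """Base state: Power = Temperature > Fan Speed.
    The card holds the base clock and the temperature target, raising
    the fan past its limit if that is what temperature control needs."""
    if temp_c >= TEMP_LIMIT_C:
        return max(fan_pct, FAN_LIMIT_PCT) + 5   # violate the fan cap
    return min(fan_pct, FAN_LIMIT_PCT)

print(boost_should_throttle(250, 95, 38))  # False: fan limit not yet hit
print(boost_should_throttle(250, 95, 40))  # True: both limits reached
```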

The end result of this scheme is that for the bulk of gaming scenarios the 290X will be throttled not on power consumption or even by temperature alone (since you will eventually always hit 95C in Quiet mode), but rather on fan speed/noise, a method unlike anything NVIDIA or AMD have done previously. By doing this AMD has established a direct, simple relationship between performance and noise. If a card is too loud, the fan speed limit can be turned down at the cost of performance. Or if a card needs more performance, the limit can be raised (to a point) at the cost of noise. And as noise is going to be the most visible aspect of the power/temp/noise triumvirate to the end user, this in turn gives the end user a high level of control over what’s usually the biggest drawback to running a high power, high performance video card. It really is that much better than any of the management paradigms that have come before it, and it is something we’d fully expect NVIDIA to copy in due time.

Before moving on from the subject of throttling however, let’s briefly touch on what’s undoubtedly going to prove to be a controversial element of the 290X’s PowerTune implementation: AMD’s 95C temperature throttle. Simply put, no desktop 28nm card thus far has been designed/intended to operate at such a high sustained temperature by default. NVIDIA’s explicit throttle point for the 700 series is 80C, and AMD’s implicit throttle point for Tahiti cards is also in the 80C range, putting both cards well below 95C under regular operation. Now to be clear, both are spec’ed to allow temperatures up to 95C (i.e. TjMax); however, that 95C throttle point is not the point either party has previously designed their equilibrium around.

So why the sudden change on AMD’s part? There are a few reasons for it. But first and foremost, let’s talk about the physical costs of higher temperatures. All other elements being held equal, temperatures affect silicon devices in 3 important ways: longevity, power consumption (leakage), and attainable clockspeeds. For longevity there’s a direct relationship between temperature and the electromigration effect, with higher temperatures causing electromigration, and ultimately ASIC failure, to occur sooner than lower temperatures would. For power consumption there is a direct relationship between temperature and power consumption, such that higher temperatures will increase static transistor leakage and therefore increase power consumption, even under identical workloads. And finally, there is a weak relationship between temperature and attainable clockspeeds, such that the switching performance of silicon transistors drops as they become warmer, making it harder to attain high clockspeeds (which is part of the reason why record-setting overclocks are achieved with GPUs cooled well into the negative Celsius range).


An example of the temperature versus power consumption principle on an Intel Core i7-2600K. Image Credit: AT Forums User "Idontcare"
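The leakage relationship in particular can be approximated with a simple exponential model, as in the sketch below; the coefficients are invented for illustration, since real silicon characterization data isn’t public.

```python
import math

# Illustrative leakage model: static (leakage) power grows roughly
# exponentially with temperature. The coefficients are made up for
# demonstration; real characterization data is proprietary.

def static_power(temp_c, p_leak_25c=20.0, k=0.012):
    """Leakage power in watts, roughly doubling every ~58C here."""
    return p_leak_25c * math.exp(k * (temp_c - 25.0))

for t in (60, 80, 95):
    print(f"{t}C: ~{static_power(t):.0f}W leakage")   # ~30W, ~39W, ~46W
```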

The important part to take away from all of this however is that these relationships occur across the entire range of temperatures a product is rated to operate under, and more importantly that all of these factors are taken into consideration in product planning. The 95C maximum operating temperature that most 28nm devices operate under is well understood by engineering teams, along with the impact to longevity, power consumption, and clockspeeds when operating both far from it and near it. In other words, there’s nothing inherently wrong with letting an ASIC go up to 95C so long as it’s appropriately planned for. And this, more than anything else, is what has changed for 290X and Hawaii.

Hawaii is a second-wind product for TSMC’s 28nm process, and one of the biggest low-level changes AMD has made relative to Tahiti is that they have been able to significantly clamp down on leakage. Not that Tahiti was a particularly leaky chip (and not that it was particularly leakless either), but as the first GPU out of TSMC’s 28nm process it was very conservatively designed, and had to be able to deal with the leakage and other nagging issues that come with an immature fabrication process. Hawaii in turn is designed against a very mature 28nm process, and designed in such a way that AMD doesn’t have to be conservative. As a result Hawaii’s leakage, though not quantified, is said to be notably reduced versus Tahiti.

What this means for 290X in turn is that one of the biggest reasons for keeping temperatures below 95C has been brought under control. AMD no longer needs to keep temperatures below 95C in order to avoid losing significant amounts of performance to leakage. From a performance perspective it has become “safe” to operate at 95C.

Meanwhile from a longevity perspective, while the underlying silicon hasn’t necessarily changed, AMD’s understanding of ASIC longevity on TSMC’s 28nm process has. Nearly two years of experience in shipping 28nm GPUs means that AMD has hard evidence for how long a GPU can last at various temperatures, and the maturation of the 28nm process in turn has extended that longevity by improving both the quality and consistency of the GPUs that come out of it. Ultimately there is always going to be a longevity cost to increasing temperatures – and only AMD knows what that cost is – but as the party responsible for warrantying their GPUs, at this point AMD is telling us that Hawaii will meet all of their longevity requirements even with the higher operating temperatures.

With that in mind, why would AMD even want to increase their operating temperatures to 95C? In short, to take full advantage of Newton’s Law of Cooling. Newton’s Law of Cooling dictates that the greater the gradient between a heat source and its environment, the more heat energy can be transferred. Or in other words, AMD is able to remove more heat energy from the GPU with the same cooling apparatus simply by operating at a higher temperature. Ergo a 290X operating at 95C can consume more power (operate at greater performance levels) while requiring no increase in cooling (noise) over what a 290X that operates at a lower temperature would require.
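A quick back-of-the-envelope calculation shows how much this buys. Assuming a fixed cooler and a 25C ambient temperature (our assumption), moving the operating point from 80C to 95C increases the temperature gradient, and therefore the heat removed at a given fan speed, by roughly 27%:

```python
# Newton's Law of Cooling: heat removed scales with the temperature
# gradient, Q = h * A * (T_die - T_ambient). Holding the cooler (h * A)
# and a 25C ambient fixed, running hotter removes more heat for free.

AMBIENT_C = 25.0

def relative_heat_removed(t_die_c, t_ref_c=80.0):
    """Heat removal at t_die_c versus a reference die temperature."""
    return (t_die_c - AMBIENT_C) / (t_ref_c - AMBIENT_C)

# At 95C the same cooler at the same fan speed dissipates ~27% more
# heat than it would at 80C, headroom AMD can spend on performance.
print(f"{relative_heat_removed(95.0):.2f}x")   # ~1.27x
```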

Now admittedly none of this makes 95C any less unsettling when first looking at temperatures, as we have become accustomed to 80C range temperatures over the years. But so long as the longevity of Hawaii matches AMD’s claims then this ultimately won’t be an issue. 95C will just be a number, and high ASIC temperatures will be another tool to maximize cooling performance. With that in mind, it will be interesting to see what AMD’s board partners do with their eventual custom Hawaii designs, assuming that they follow the same cooling paradigm as AMD. How much quieter would a Gigabyte Windforce or Asus DirectCU II based Hawaii card be able to operate if it was allowed to (and capable of) operating at 95C sustained? The answer to that, we expect, should prove to be a lot of fun.

Having established in detail how the latest iteration of PowerTune works, let’s finally talk about how this iteration of PowerTune will affect end-user tweaking and overclocking.

As to be expected, AMD has opted to expose all of their new PowerTune power controls via their Overdrive control panel, and as such users have full control over both overclocking and throttle controls. On the throttle side this includes both the traditional power limit controls, and new controls to set the target GPU temperature and the maximum fan speed. These follow the rules we noted earlier, so lowering the GPU temperature target, for example, causes the fan speed to ramp up more quickly, while bringing down the maximum fan speed will result in a greater throttle on overall performance.

Meanwhile the overclocking controls have also received a facelift, and unlike the throttle controls, we’re having a harder time getting behind these changes. In short, Overdrive now adjusts the GPU and memory clockspeed on a relative percentage basis rather than an absolute frequency basis. On the one hand this brings consistency with how power adjustments have always worked, and yet on the other hand we can’t help but feel that percentage based overclocking is decidedly unhelpful and unintuitive. 10% is far less meaningful than 100MHz in this context, and it’s going to get even worse once we see factory overclocked cards and multiple tiers of Hawaii cards. Consequently we’d really rather have the original absolute frequency controls back. AMD is simply abstracting clockspeeds by too much.
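To illustrate the ambiguity, the same +10% slider position translates into a different absolute overclock on every SKU; the card names and base clockspeeds below are hypothetical stand-ins:

```python
# Why percentage based overclocking is unintuitive: the same +10%
# slider setting means a different MHz bump on every SKU. Clockspeeds
# below are illustrative stand-ins for hypothetical card tiers.

cards = {
    "reference card":        1000,
    "factory OC card":       1040,
    "cut-down Hawaii card":   947,
}

for name, base_mhz in cards.items():
    print(f"+10% on the {name}: {base_mhz} -> {base_mhz * 1.10:.0f} MHz")
```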

Finally, along with the traditional sliders and settings boxes, AMD has introduced one final graphical element into Overdrive, and that is a 2D heatmap for overclocking. Placing the power limit on the X axis and the GPU clockspeed on the Y axis, the heatmap provides a simple graphical representation of the impact of adjusting those values. The heatmap is a bit imprecise, and I suspect most seasoned overclockers will stick to punching in numbers directly, but otherwise it’s a nifty simplification of overclocking.

With the above in mind, the last factor we’re waiting to see play out is how 3rd party utilities such as MSI’s Afterburner choose to implement these new controls. AMD meets and exceeds GPU Boost 2.0 with respect to flexibility, but monitoring/reporting was never a strong suit for Overdrive. Just based on our own experiences in putting this article together, an equivalent to NVIDIA’s “reason” throttling flags would be incredibly helpful as it’s not always obvious why the 290X is throttling, especially if it’s throttling for power reasons. If AMD can provide that data to 3rd party utilities, then combined with the rest of the functionality we’ve seen they would have an unquestionable claim to bragging rights on whose power management technology is better.

Comments

  • 46andtool - Thursday, October 24, 2013 - link

    I don't know where you're getting your information, but you're obviously NVIDIA biased because it's all wrong. AMD is known for using poor reference coolers; once manufacturers like Sapphire and HIS roll out their cards in a couple weeks I'm sure the noise and heat won't be a problem. And the 780 Ti is poised to be between a GTX 780 and a Titan; it will not be faster than a 290X, sorry. We already have the 780 Ti's specs. What NVIDIA needs to focus on is dropping its insane pricing.
  • SolMiester - Monday, October 28, 2013 - link

    Sorry bud, but the Ti will be much faster than Titan, otherwise there is no point; hell, even the 780 OC is enough to edge the Titan. Why are people going on about Titan? It's a once-in-a-blue-moon product to fill a void that AMD left open with CUDA dev for prosumers... Full monty with perhaps 7GHz memory, wahey!
  • Samus - Friday, October 25, 2013 - link

    What in the world makes you think the 780Ti will be faster than Titan? That's ridiculous. What's next, a statement that the 760Ti will be faster than the 770?
  • TheJian - Friday, October 25, 2013 - link

    http://www.techradar.com/us/news/computing-compone...
    Another shader and more mhz.
    http://news.softpedia.com/news/NVIDIA-GeForce-GTX-...
    If the specs are true, quite a few sites think it will be faster than Titan.
    http://hexus.net/tech/news/graphics/61445-nvidia-g...
    Check the table. The 780 Ti would win in GFLOPS if the leak is true. The extra 80MHz + 1 SMX mean it should either tie or barely beat it in nearly everything.

    Even a tie at $650 would be quite awesome at possibly less watts/heat/noise. Of course it will be beat a week later by a fully unlocked Titan Ultra or more MHz, or MHz + fully unlocked. NV won't just drop Titan. They will make a better one easily. It's not like NV just sat on their butts for the last 8 months. It's comic anyone thinks AMD has won. OK, for a few weeks tops (and not really even now, other than price, looking at 1080p and the games I mentioned previously).
  • ShieTar - Thursday, October 24, 2013 - link

    It doesn't cost less than a GTX 780, it only has a lower MSRP. The actual price for which you can buy a GTX 780 is already below $549 today, so as usual you pay the same price for the same performance with both companies.

    And testing 4K gaming is important right now, but it should be another 3-5 years before 4K performance actually impacts sales figures in any relevant way.

    And about Titan? Keep in mind that it is 8 months old, still has one SMX disabled (unlike the Quadro K6000), and still uses less power in games than the 290X. So I wouldn't be surprised to see a Titan+ come out soon, with 15 SMX and higher base clocks, and as Ryan puts it in this article "building a refined GPU against a much more mature 28nm process". That should be enough to gain 10%-15% performance in anything restricted by computing power, thus showing a much more clear lead over the 290X.

    The only games that the 290X will clearly win are those that are restricted by memory bandwidth. But nVidia have proven with the 770 that they can operate memory at 7GHz as well, so they can increase Titan's bandwidth by 16% through clocks alone.

    Don't get me wrong, the 290X looks like a very nice card, with a very good price to it. I just don't think nVidia has any reason to worry, this is just competition as usual, AMD have made their move, nVidia will follow.
  • Drumsticks - Thursday, October 24, 2013 - link

    http://www.newegg.com/Product/ProductList.aspx?Sub...

    Searched on Newegg, sorted by lowest price; the lowest one was, surprise, $650. I don't think Newegg is over $100 off from competitors' pricing.
  • 46andtool - Thursday, October 24, 2013 - link

    http://www.newegg.com/Product/Product.aspx?Item=N8...

    you're clearly horrible at searching
  • TheJian - Friday, October 25, 2013 - link

    $580 isn't $550 though, right? And out of stock. I wonder how many of these they can actually make, seeing how hot it already runs, pegged at 94C in every review. Nobody was able to OC it past 1125. They're clearly pushing this thing a lot already.
  • ShieTar - Friday, October 25, 2013 - link

    Well, color me surprised. I admittedly didn't check the US market, because for more than a decade now electronics have been sold in the Euro region with a price conversion assumption of 1€=1$, so everything was about 35% more expensive over here (but including 19% VAT of course).

    So for this discussion I used our German comparison engines. Both the GTX780 and the R290X are sold for the same price of just over 500€ over here, which is basically 560$+19%VAT. I figured the same price policies would apply in the US, it basically always does.

    Well, as international shipping is rarely more than $15, it would seem your cheapest way to get a 780 right now is to actually import it from Germany. It's always been the other way around with electronics; interesting to see it reversed for once.
  • 46andtool - Thursday, October 24, 2013 - link

    the price of a GTX 780 is not below $649 unless you are talking about a refurbished or open box card.
