Tweaking PowerTune

While the primary purpose of PowerTune is to keep the power consumption of a video card within its TDP in all cases, AMD has realized that PowerTune isn’t necessarily something everyone wants, and so they’re making it adjustable in the Overdrive control panel. With Overdrive you’ll be able to adjust the PowerTune limits both up and down by up to 20% to suit your needs.

We’ll start with the case of increasing the PowerTune limit. While AMD does not allow users to turn PowerTune off completely, they offer the next best thing: a slider in the Overdrive control panel that allows a +/- 20% adjustment to the PowerTune limit. In the case of the 6970 this means the limit can be set anywhere between 200W and 300W, the latter being the ATX spec maximum.
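
For those who want to sanity-check the slider math, here’s a minimal sketch of how the limits work out. This is our own illustration of the arithmetic, not anything AMD exposes programmatically:

```python
# Hypothetical illustration of the Overdrive PowerTune slider math;
# AMD exposes this as a percentage slider in Catalyst, not as an API.
STOCK_LIMIT_W = 250  # Radeon HD 6970 stock PowerTune limit

def powertune_limit(adjust_pct: float) -> float:
    """Effective PowerTune limit for a slider setting in [-20, +20] percent."""
    if not -20 <= adjust_pct <= 20:
        raise ValueError("reference cards allow only +/- 20%")
    return STOCK_LIMIT_W * (1 + adjust_pct / 100)

print(powertune_limit(-20))  # 200.0 W, the floor
print(powertune_limit(+20))  # 300.0 W, the ATX spec maximum
```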

Ultimately the purpose of raising the PowerTune limit depends on just how far you raise it. A slight increase can bring a slight performance advantage in any game/application that is held back by PowerTune, while going the whole nine yards to +20% is, for all practical purposes, disabling PowerTune at stock clocks and voltages.

We’ve already established that at the stock PowerTune limit of 250W only FurMark and Metro 2033 are PowerTune limited, with only the former limited in any meaningful way. So with that in mind we increased our PowerTune limit to 300W and re-ran our power/temperature/noise tests to look at the full impact.

Radeon HD 6970: PowerTune Performance
                      PowerTune 250W   PowerTune 300W
Crysis Temperature    78°C             79°C
FurMark Temperature   83°C             90°C
Crysis Power          340W             355W
FurMark Power         361W             422W

As expected, power and temperature both increase with FurMark at the 300W PowerTune limit. At this point FurMark is no longer constrained by PowerTune and our 6970 runs at 880MHz throughout the test. Overall our power consumption measured at the wall increased by 60W, while FurMark’s core clock is 46.6% higher than under the 250W cap. It was under this scenario that we also “uncapped” PowerTune for Metro, where we found that even though Metro was being throttled at times, the performance impact was negligibly small.
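
Working backwards from that clockspeed figure (our own arithmetic, not a number AMD publishes), the 46.6% speedup implies the average throttled FurMark clock under the 250W cap:

```python
UNCAPPED_MHZ = 880   # stock 6970 core clock, unthrottled at the 300W limit
SPEEDUP = 1.466      # FurMark's clock is 46.6% higher at 300W than at 250W

throttled_mhz = UNCAPPED_MHZ / SPEEDUP
print(round(throttled_mhz))  # ~600 MHz average under the 250W cap
```

In other words, FurMark at the stock limit was spending much of its time around 600MHz, which is consistent with PowerTune working purely through the core clock.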

Meanwhile we found something interesting when running Crysis. Even though Crysis is not impacted by PowerTune, Crysis’ power consumption still crept up by 15W. Performance is exactly the same, and yet here we are with slightly higher power consumption. We don’t have a good explanation for this at this point – PowerTune only affects the core clock (and not the core voltage), and we never measured Crysis taking a hit at 250W or 300W, so we’re not sure just what is going on. However, we’ve already established that FurMark is the only program realistically impacted by the 250W limit, so at stock clocks there’s little reason to increase the PowerTune limit.

This does bring up overclocking, however. Due to the limited amount of time we had with the 6900 series we have not been able to do a serious overclocking investigation, but as clockspeed is a factor in the power equation, PowerTune is going to impact overclocking. You’re going to want to raise the PowerTune limit when overclocking, as otherwise PowerTune is liable to bring your clocks right back down to keep power consumption below 250W. The good news for hardcore overclockers is that while AMD set a 20% limit on our reference cards, partners will be free to set their own tweaking limits; we’d expect high-end cards like the Gigabyte SOC, MSI Lightning, and Asus Matrix lines to all feature higher limits to keep PowerTune from throttling extreme overclocks.
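
To see why an overclock runs into the cap so quickly, consider the usual first-order dynamic power relationship, P ≈ C·V²·f: power grows linearly with clockspeed but with the square of voltage. A quick sketch follows; the overclock and voltage numbers in it are hypothetical values of ours, purely for illustration:

```python
def scaled_power(p_stock_w, f_stock_mhz, f_oc_mhz, v_stock, v_oc):
    """First-order dynamic power estimate (P ~ C * V^2 * f): power scales
    linearly with core clock and with the square of core voltage."""
    return p_stock_w * (f_oc_mhz / f_stock_mhz) * (v_oc / v_stock) ** 2

# Hypothetical overclock for illustration: 880 -> 950 MHz, 1.17 -> 1.25 V
est_w = scaled_power(250, 880, 950, 1.17, 1.25)
print(f"{est_w:.0f} W")  # ~308 W, beyond even the +20% (300W) limit
```

Even a modest clock-and-voltage bump can push a 250W card past the 300W ceiling, which is exactly why extreme overclocks need higher PowerTune limits than the reference cards allow.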

Meanwhile there’s a second scenario AMD has thrown at us for PowerTune: tuning down. Although we generally live by the “more is better” mantra, there is some logic to this. Going back to our dynamic range example, shrinking the dynamic power range pushes power hogs at the top of the spectrum down, but thanks to AMD’s ability to use higher default core clocks, the power consumption of low-impact games and applications goes up. In essence, power consumption gets just a bit worse because performance has improved.

Traditionally V-sync has been used as the preferred method of limiting power consumption by limiting a card’s performance, but V-sync introduces additional input lag and the potential for skipped frames when triple-buffering is not available, making it a suboptimal solution in some cases. Thus if you wanted to keep a card at a lower performance/power level for a given game/application but did not want to use V-sync, you were out of luck unless you wanted to start playing with core clocks and voltages manually. By being able to turn down the PowerTune limit, however, you can now constrain power consumption and performance on a simpler basis.
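
Conceptually, the mechanism is a feedback loop on a power estimate rather than on framerate. The sketch below is our simplified illustration of the idea; AMD’s actual implementation works from the GPU’s internal digital power estimates, and the step sizes here are invented:

```python
def powertune_step(power_estimate_w, limit_w, clock_mhz,
                   step_mhz=10, max_clock_mhz=880):
    """One iteration of a cap-enforcing control loop: pull the core clock
    down while the power estimate exceeds the limit, and creep back up
    toward the stock clock when there is headroom."""
    if power_estimate_w > limit_w:
        return max(clock_mhz - step_mhz, 0)
    return min(clock_mhz + step_mhz, max_clock_mhz)
```

Because a loop like this caps power directly instead of capping framerate, it adds no input lag and never skips frames; performance simply settles wherever the power budget allows.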

As with the 300W PowerTune limit, we ran our power/temperature/noise tests with the 200W limit to see what the impact would be.

Radeon HD 6970: PowerTune Performance
                      PowerTune 250W   PowerTune 200W
Crysis Temperature    78°C             71°C
FurMark Temperature   83°C             71°C
Crysis Power          340W             292W
FurMark Power         361W             292W

Right off the bat everything is lower. FurMark is now at 292W, and quite surprisingly Crysis is also at 292W. This plays off the fact that most games don’t cause a card to approach its limit in the first place, so bringing the ceiling down brings the power consumption of more power-hungry games and applications down to the same levels as lesser games/applications.

Although not whisper-quiet, our 6970 is definitely quieter at the 200W limit than at the default 250W limit thanks to the lower power consumption. However, the 200W limit also impacts practically every game and application we test, so performance is definitely going to go down across the board if you reduce the PowerTune limit by the full 20%.

Radeon HD 6970: PowerTune Crysis Performance (fps)
             PowerTune 250W   PowerTune 200W
2560x1600    36.6             28.0
1920x1200    51.5             43.3
1680x1050    63.3             52.0

At 200W, you’re looking at roughly 75%-85% of full performance in Crysis, depending on the resolution. The exact value will depend on just how heavy a load the specific game/application presented in the first place.
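
Running the numbers from the table above (our own arithmetic) makes the spread clear:

```python
crysis_fps = {  # resolution: (fps at 250W, fps at 200W), from the table above
    "2560x1600": (36.6, 28.0),
    "1920x1200": (51.5, 43.3),
    "1680x1050": (63.3, 52.0),
}
for res, (full, capped) in crysis_fps.items():
    print(f"{res}: {capped / full:.0%} of uncapped performance")
# -> 77%, 84%, and 82% respectively
```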

Comments
  • versesuvius - Friday, December 17, 2010

    Ananke,

    I am not very knowledgeable about this, but I don't think a modern GPU can fit inside a CPU for now. A better idea would be a console on a card. The motherboards in consoles are not much bigger than the large graphics cards of today. A console card for $100 would be great. I am sure there are no technical obstacles that the average electronics wizard cannot overcome in doing that.

    Sure, there is a use for everything. I can imagine that every single human being on earth can find a use for a Ferrari, but the point is that even those who do have one do not use it as often as their other car (Toyota, VW or whatever). In fact, there is rarely a Ferrari that has more than 20,000 km on it, and even that is put on it by successive owners, not one. The average total an ordinary person can stand to put on a Ferrari is 5,000 km. (Disclaimer: I do not have one. I only read something to that effect somewhere.) Having said that, I do have a sense of the "need for speed". I can remember sitting in front of the university's 80286 waiting for the FE program to spit out the results, one node at a time, click, click, ... . You have millions of polygons; we can have billions of mesh nodes, and even that does not begin to model a running faucet. How's that for the need for speed? I do appreciate the current speeds. However, the CPU deal was and is a straight one. The graphics card deals, today, are not. To be clear, the "and" in "High End"s and "Fool"s is an inclusive one. "Someone will pay for it" was also initiated in the eighties of the last century. By the way, the big question "can it play Crysis" will no longer be. Crysis 2 is coming to the consoles.
  • Quidam67 - Friday, December 17, 2010

    "But can it play Crysis" should be in the Urban dictionary as a satirical reference on graphics code that combines two potent attributes: 1) is way ahead of its time in terms of what current hardware can support 2) is so badly written and optimised that even hardware that should be able to run it still can't.

    In 1000 years' time, when organic graphics cards that you can plug into your head still can't run it smoothly @2560*1600 60fps, they will realise the joke was on us and that the code itself was written to run more and more needless loops in order to overwhelm any amount of compute resource thrown at it.
  • Iketh - Friday, December 24, 2010

    LOL
  • marc1000 - Friday, December 17, 2010

    I swear I've read ALL the comments to see if anyone already pointed this out... but no one did.

    I feel a bit disappointed with this launch too (I have a 5770 and wanted to get a 6950, but was hoping for a bigger increase percentage-wise). But one interesting thing is the number of Stream Processors in the new GPUs. By the "pure processor" count this number decreased from 1600 SPs on the 5870 to 1536 SPs on the 6970. But the size of the VLIW processors changed too: it was 5 SPs on the 5870 and is now 4 SPs.

    So we have:
    hd5870 = 1600 SPs / 5 = 320 "processors"
    hd6970 = 1536 SPs / 4 = 384 "processors"

    If we take that 384 and multiply by 5, we would have 1920 SPs on the new generation (on par with many rumors). This is 20% more shaders, and considering AMD is saying that the new VLIW4 is 10% faster than VLIW5, we should see more than a 20% increase in all situations. But this is only true in a minority of tests (like Crysis at 2560x1600, where it is 24%, while in the same game at 1680x1050 the increase is only 16%). And at the same time the minimum FPS got better, yet in other games the difference is smaller.

    But then again, I was expecting a little more. I believe the 6950 will be a worthy upgrade for me, but the expectations were so high that too many people ended up a little disappointed... myself included.
  • Sunburn74 - Tuesday, December 28, 2010

    Well... at least they delivered on time and didn't make you wait 6 more months to simply deliver an equivalent, if not considerably worse, product.
  • Mr Perfect - Friday, December 17, 2010

    Yes, the minimums are appreciated when they're included.

    It would be even better if the framerates were displayed as a line graph instead of a bar graph. That way readers could tell if an average consisted of a lot of high peaks and low valleys, or really was a nice smooth experience all the way through. Some other review sites use line graphs, and while I visit Anandtech for its timeliness, professionalism, industry insight and community involvement, I go to the other sites for the actual performance numbers.
  • Quidam67 - Friday, December 17, 2010

    There is further rationale for splitting the article. Let's say someone is googling "HD 6970 architecture": perhaps they will pick up this review, or perhaps they won't, but either way, if they see that it is actually a review of the cards, they might be inclined to bypass it in favour of a more focused piece.

    And again, there is no reason why the Architecture Article can't provide a hyperlink to the review, if the reader then decides they want to see how that architecture translates into performance on the current generation of cards supporting it.

    I really hope AT are reading this and giving it some consideration. As you say, they are a great site and no one is disputing that, but it's not a religion, so you should be allowed to question it without being accused of blasphemy :O)
  • dustcrusher - Friday, December 17, 2010

    It really comes down to how important the mainstream market is. If they are a large enough segment of the market, one company using a simple, easy-to-grasp naming convention would likely grab some market share. Make it easy to buy your product and at least some people will be more likely to do so.

    If not, then it's fun to talk about but not terribly important. Tech-savvy folk will buy whatever meets their needs price/performance-wise after doing research, even if a card is named the Transylvania 6-9000 or the Wankermeister GTFO. Tech-naive folk, meanwhile, are going to buy the largest model number they can get with the money they have, because "larger model number = bigger/better equipment" is a long-established consumer shorthand.

    I have a half-baked idea for a model numbering system that's based around the key specs of the card: it's a 5-digit system where the first digit is the hardware platform ID (like what we have now, mostly) and the other four would represent combinations of other specs (one digit could encode memory clock speed and bus width, where the lowest combination would be 1, the next lowest would be 2, etc).

    No idea if this could actually be implemented; there are probably too many variables with GPU/memory clock speeds, among other things.
  • Shinobi_III - Saturday, December 18, 2010

    If you ever saw Nvidia 4xAA in action, you know it's not as smooth as the Radeon implementation (especially in motion), and z-buffer miscalculations have always been an Nvidia feature.

    Go up a hill in Fallout New Vegas and look at Vegas on the horizon; with Nvidia cards it always looks like a disco due to meshes overlapping. Now do the same on a Radeon.
  • TheUsual - Saturday, December 18, 2010

    Right now, Newegg has a 6870 for $200 after rebate. Two of these make for an awesome value at $400. The top tier of cards doesn't give a corresponding increase in performance for the extra cost: two 6950s cost 50% more but do not give you 50% more FPS. Two GTX 460 1GBs are also a great bang for the buck at $300.

    Neither of these lets you do triple SLI/XFIRE, however. That is what you would be paying extra for.

    My hope is that the price will drop on the 6950 by around February. By then the GTX 560 should be out and might drive prices down some. The benchmarks could change some with Sandy Bridge too, if they are currently CPU bound.
