PowerTune, Cont

PowerTune’s functionality is accomplished in a two-step process. The first step is defining the desired TDP of a product. Notably (and unlike NVIDIA) AMD is not using power monitoring hardware here, citing the costs of such chips and the additional design complexities they create. Instead AMD is profiling the performance of their GPUs to determine what the power consumption behavior is for each functional block. This behavior is used to assign a weighted score to each functional block, which in turn is used to establish a rough equation to find the power consumption of the GPU based on each block’s usage.

AMD doesn’t provide the precise equation used, but you can envision it looking something like this:

Power Consumption = ((shaderUsage * shaderWeight) + (ropUsage * ropWeight) + (memoryUsage * memoryWeight)) * clockspeed

In the case of the Radeon HD 6970, the TDP is 250W, while the default clockspeed is 880MHz.
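To make the math concrete, here is a minimal sketch of what such an equation might look like in code. Every block name, weight, and constant in it is invented for illustration; AMD doesn’t publish the real coefficients.

```python
# A minimal sketch of a weighted power equation like the one above. Every
# name and constant here is invented for illustration; AMD does not
# publish the real functional blocks, weights, or scale factors.

BLOCK_WEIGHTS = {
    "shader": 0.55,  # hypothetical per-block weighted scores
    "rop":    0.25,
    "memory": 0.20,
}

def estimate_power(usage, clock_mhz, scale=0.29):
    """Estimate GPU power draw in watts from per-block utilization (0.0-1.0).

    `scale` is an invented constant picked so that a fully loaded GPU at
    880MHz lands in the neighborhood of the 6970's 250W TDP.
    """
    weighted_usage = sum(usage[block] * weight
                         for block, weight in BLOCK_WEIGHTS.items())
    return weighted_usage * clock_mhz * scale

# A FurMark-style worst case: every block pegged, at the default 880MHz.
print(estimate_power({"shader": 1.0, "rop": 1.0, "memory": 1.0}, 880))
# -> ~255W, slightly over the 250W TDP, so PowerTune would have to throttle
```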

With a power equation established, AMD can then adjust GPU performance on the fly to keep power consumption under the TDP. This is accomplished by dynamically adjusting just the core clock based on GPU usage a few times a second. So long as power consumption stays under 250W the 6970 stays at 880MHz, and if power consumption exceeds 250W then the core clock will be brought down to keep power usage in check.

It’s worth noting that in practice the core clock and power usage do not have a linear relationship, so PowerTune may have to drop the core clock by quite a bit in order to maintain its power target. The memory clock and even the core voltage remain unchanged (these are only set with PowerPlay states), so PowerTune only has the core clock to work with.
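The control loop itself can be pictured as something like the sketch below. The step size, update rate, and power model are stand-ins of our own, not AMD’s actual implementation, but the shape of the logic matches what AMD describes: estimate power, then nudge only the core clock.

```python
import time

TDP_WATTS = 250       # the 6970's power target
MAX_CLOCK_MHZ = 880   # default core clock; PowerTune never raises it
STEP_MHZ = 10         # invented step size; AMD doesn't document the granularity

def estimate_power(load, clock_mhz):
    """Stand-in for the weighted power equation sketched earlier."""
    return load * clock_mhz * 0.29  # invented scale constant

def powertune_tick(clock_mhz, load):
    """One iteration of a PowerTune-style control loop.

    Only the core clock moves; the memory clock and core voltage stay at
    their PowerPlay-state values, exactly as described above.
    """
    if estimate_power(load, clock_mhz) > TDP_WATTS:
        return max(clock_mhz - STEP_MHZ, 0)          # over budget: step down
    return min(clock_mhz + STEP_MHZ, MAX_CLOCK_MHZ)  # headroom: climb back

# Evaluated "a few times a second" under a FurMark-style 100% load, the
# clock hunts between 860MHz and 870MHz, right at the power limit.
clock = MAX_CLOCK_MHZ
for _ in range(8):
    clock = powertune_tick(clock, load=1.0)
    print(clock)
    time.sleep(0.25)
```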

Ultimately PowerTune is going to fundamentally change how we measure and classify AMD’s GPUs. With PowerTune the TDP really is the TDP; as a completely game/application agnostic way of measuring and containing power consumption, it’s simply not possible to exceed the TDP. The power consumption of the average game is still below the TDP – sometimes well below – so there’s still an average case and a worst case scenario to discuss, but the range between them just got much smaller.

Furthermore, as a result, real world performance is going to differ from theoretical performance that much more. Just as with CPUs, the performance you get is the performance you get: teraFLOPs, cache bandwidth, and clockspeeds alone won’t tell you everything about a product. The TDP and whether the card regularly crosses it will factor into performance, just as cooling factors into CPU performance by allowing or prohibiting higher turbo modes. At least for AMD’s GPUs, we’re now going to be talking about how much performance you can get for any given TDP instead of specific clockspeeds, bringing performance per watt to the forefront of importance.

So by now you’re no doubt wondering what the impact of PowerTune is, and the short answer is that there’s virtually no impact. We’ve gone ahead and compiled a list of all the games and applications in our test suite, along with whether they triggered PowerTune throttling. Of the fifteen tests, only three triggered PowerTune: FurMark as expected, along with Metro 2033 and 3DMark Vantage. Furthermore, as you can see, there was a significant difference between our 6970’s average clockspeeds in Metro and FurMark.

AMD Radeon HD 6970 PowerTune Throttling

Game/Application          Throttled?
Crysis: Warhead           No
BattleForge               No
Metro 2033                Yes (850MHz)
HAWX                      No
Civilization V            No
Bad Company 2             No
STALKER                   No
DiRT 2                    No
Mass Effect 2             No
Wolfenstein               No
3DMark Vantage            Yes
MediaEspresso 6           No
Unigine Heaven            No
FurMark                   Yes (600MHz)
Distributed.net Client    No

In the case of Metro the average clockspeed was 850MHz: Metro spent 95% of its time running at 880MHz, and only at a couple of points did the core clock drop to around 700MHz. Conversely FurMark, a known outlier, drove the average core clock down to 600MHz, a roughly 30% reduction. So while PowerTune definitely had an impact on FurMark performance, it did almost nothing to Metro, never mind any other game or application. To illustrate the point, here are our Metro numbers with and without PowerTune.

Radeon HD 6970: Metro 2033 Performance (fps)

Resolution    PowerTune 250W    PowerTune 300W
2560x1600     25.5              26.0
1920x1200     39.0              39.5
1680x1050     64.5              65.0

The difference is no more than 0.5fps on average, which may as well be within our experimental error range for this benchmark. For everything we’ve tested on the 6970 and the 6950, the default PowerTune settings do not have a meaningful performance impact on any game or application. Thus at this point we’re confident that there are no immediate drawbacks to PowerTune for desktop use.
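Working from the table’s own numbers, that deficit is easy to put in relative terms; here is a quick check using only the figures above:

```python
# Relative cost of the 250W cap versus the 300W setting, computed from
# the Metro 2033 table above.
results = {  # resolution: (fps at 250W, fps at 300W)
    "2560x1600": (25.5, 26.0),
    "1920x1200": (39.0, 39.5),
    "1680x1050": (64.5, 65.0),
}
for res, (capped, uncapped) in results.items():
    loss = (uncapped - capped) / uncapped * 100
    print(f"{res}: {loss:.1f}% slower with the 250W cap")
# 2560x1600: 1.9% / 1920x1200: 1.3% / 1680x1050: 0.8%
```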

Ultimately this is a negative feedback mechanism, unlike Turbo, which is a positive feedback mechanism: without overclocking the best a 6970 will run at is 880MHz, whereas Turbo increases clockspeeds when conditions allow. Neither approach is absolutely the right way to do things, but there’s a very different perception when performance is taken away versus when performance is “added” for free. I absolutely like where this is going – both as a hardware reviewer and as a gamer – but I’d be surprised if this didn’t generate at least some level of controversy.

Finally, while we’ve looked at PowerTune in the scope of desktop usage, we’ve largely ignored other cases so far. AMD will be the first to tell you that PowerTune is more important for mobile use than for desktop use, and mobile use is all the more important as the balance of desktops and laptops sold continues to slide towards laptops. In the mobile space, not only does PowerTune mean that AMD will absolutely hit their TDPs, but it should allow them to ship mobile GPUs with higher stock core clocks, comfortable in the knowledge that PowerTune will keep power usage in check in the heaviest games and applications. The real story for PowerTune doesn’t even begin until 2011 – as far as the 6900 series is concerned, this may as well be a sneak peek.

Even then there’s one possible exception we’re waiting to see: the 6990 (Antilles). The Radeon HD 5970 put us in an interesting spot: it was and still is the fastest card around, but unless you can take advantage of CrossFire it’s slower than a single 5870, a byproduct of the fact that AMD had to use lower core and memory clocks to make their 300W TDP. This is in stark contrast to the 4870X2, which really was two 4870s glued together, with the same single-GPU performance. With PowerTune AMD doesn’t necessarily need to repeat the 5970’s castrated clocks; they could make a 6970X2 and let PowerTune clip performance as necessary to keep it under 300W. If the card is being used without CrossFire, for example, then there’s no reason not to run the one active GPU at full speed. It would be the best of both worlds.

In the meantime we’re not done with PowerTune quite yet. PowerTune isn’t just something AMD can set – it’s adjustable in the Overdrive control panel too.
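As a rough sketch of what that adjustment amounts to, assuming the slider works in percentage terms over a -20% to +20% range (note that the 300W setting in our Metro table is the 250W default raised by 20%):

```python
DEFAULT_TDP_WATTS = 250  # Radeon HD 6970

def overdrive_power_limit(slider_percent):
    """Translate an Overdrive PowerTune slider setting into a power cap.

    Assumes a -20% to +20% slider range; +20% is exactly how the 300W
    figure in the Metro table above is reached (250W * 1.20 = 300W).
    """
    if not -20 <= slider_percent <= 20:
        raise ValueError("PowerTune slider is limited to +/-20%")
    return DEFAULT_TDP_WATTS * (1 + slider_percent / 100)

print(overdrive_power_limit(20))   # 300.0 -> effectively unthrottled
print(overdrive_power_limit(-20))  # 200.0 -> trade performance for power
```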
