Redefining TDP With PowerTune

One of our fundamental benchmarks is FurMark, oZone3D’s handy GPU load testing tool. The furry donut can generate a workload in excess of anything any game or GPGPU application can produce, giving us an excellent way to establish a worst-case scenario for power usage, GPU temperatures, and cooler noise. The fact that it’s worse than any game or application has ruffled both AMD’s and NVIDIA’s feathers, however, as it’s been known to kill older cards and otherwise make their lives more difficult, leading the two companies to label the program a “power virus”.

FurMark is just one symptom of a larger issue, however, and that’s TDP. Compared to their CPU counterparts at only 140W, video cards are power monsters. The PCI Express specification allows for cards to draw up to 300W, and we quite regularly surpass that when FurMark is in use. Things get even dicier on laptops and all-in-one computers, where compact spaces and small batteries limit how much power a GPU can draw and how much heat can effectively be dissipated. For these reasons products need to be designed to meet a certain TDP; in the case of desktop cards we saw products such as the Radeon HD 5970, which shipped with sub-5870 clocks to meet the 300W TDP (with easy overvolting controls to make up for it), and in laptop parts we routinely see products with many disabled functional units and low clocks to meet those particularly low TDP requirements.

Although we see both AMD and NVIDIA surpass their official TDP on FurMark, it’s never by very much. After all, TDP defines the thermal limits of a system, so if you regularly surpass those limits you risk overwhelming the cooling and ultimately damaging the system. It’s because of FurMark and similar scenarios that AMD claims they have to set their products’ performance lower than they’d like. Call of Duty, Crysis, The Sims 3, and other games aren’t necessarily causing video cards to draw power in excess of their TDP, but edge cases like FurMark are. As a result AMD has to plan around the applications and games that cause the highest power draw, setting performance levels low enough that these edge cases don’t lead to the GPU regularly surpassing its TDP.

This ultimately leads to a concept similar to dynamic range, defined by Wikipedia as “the ratio between the largest and smallest possible values of a changeable quantity.” We typically use dynamic range when talking about audio and video, referring to the range between quiet and loud sounds, and dark and light imagery respectively. However, power draw is quite similar in concept, with a variety of games and applications leading to a variety of loads on the GPU. Furthermore, while dynamic range is generally a good thing for audio and video, it’s generally a bad thing for desktop GPU usage – low power utilization in a GPU-bound game means there’s plenty of headroom for bumping up clocks and voltages to improve that game’s performance. Going back to our earlier example, however, a GPU can’t be set this high under normal conditions, as FurMark and similar applications would then push it well past its TDP.
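The dynamic-range analogy can be made concrete with a toy calculation. The workload names are from the article, but the wattages below are hypothetical illustrations, not measured values:

```python
# Hypothetical per-workload board power draws (watts) for a single GPU.
# These numbers are made up for illustration only.
workloads = {
    "The Sims 3": 140,   # a power-light game
    "Crysis": 210,       # a power-demanding game
    "FurMark": 320,      # worst-case "power virus" load
}

# Dynamic range: the ratio between the largest and smallest draw.
dynamic_range = max(workloads.values()) / min(workloads.values())
print(f"Power dynamic range: {dynamic_range:.2f}x")
```

In this sketch the GPU’s power draw spans a better-than-2x range, which is exactly the gap a fixed clockspeed has to be provisioned against.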

The answer to the dynamic power range problem is to have variable clockspeeds: set the clocks low to keep power usage down on power-demanding games, and set the clocks high on power-light games. In fact we already have this in the CPU world, where Intel and AMD use their turbo modes to achieve it. If there’s enough thermal and power headroom, these processors can increase their clockspeeds by several steps. This allows AMD and Intel to not only offer processors that are faster on average, but to specifically focus on improving single-threaded performance by pushing one core well above its normal clockspeed when it’s the only core in use.

It was only a matter of time until this kind of scheme came to the GPU world, and that time is here. Earlier this year we saw NVIDIA lay the groundwork with the GTX 500 series, implementing external power monitoring hardware for the purpose of identifying and slowing down FurMark and OCCT; however that’s as far as they went, capping only those two programs. With Cayman and the 6900 series, AMD is taking this to the next step with a technology called PowerTune.

PowerTune is a power containment technology, designed to allow AMD to contain the power consumption of their GPUs to a pre-determined value. In essence it’s Turbo in reverse: instead of having a low base clockspeed and higher turbo multipliers, AMD is setting a high base clockspeed and letting PowerTune cap GPU performance when it exceeds AMD’s TDP. The net result is that AMD can reduce the dynamic power range of their GPUs by setting high clockspeeds at high voltages to maximize performance, and then letting PowerTune cap GPU performance for the edge cases that cause GPU power consumption to exceed AMD’s preset value.
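The “Turbo in reverse” behavior can be sketched as a clamp rather than a boost. This is a toy model assuming power scales linearly with clockspeed (real hardware does not); the 880MHz clock and 250W cap are borrowed from the 6970’s specifications purely for scale, and the function name is hypothetical:

```python
def powertune_clock(max_mhz, min_mhz, power_at_max_w, cap_w):
    """Scale the clock down just enough to bring estimated power under a cap.

    Turbo in reverse: start from a high base clock and only throttle when
    the estimated power for the current workload exceeds the preset cap.
    """
    if power_at_max_w <= cap_w:
        return max_mhz  # typical games: no capping, full clockspeed
    # Edge case (e.g. FurMark): reduce the clock proportionally to fit.
    scaled = int(max_mhz * cap_w / power_at_max_w)
    return max(min_mhz, scaled)

# A normal game drawing under the cap runs at the full clockspeed...
game = powertune_clock(880, 500, 190, 250)      # -> 880
# ...while a worst-case load is clamped to stay within the cap.
furmark = powertune_clock(880, 500, 320, 250)   # -> 687
```

The key design point the sketch captures is that the common case pays nothing: only workloads whose estimated power exceeds the cap ever see a reduced clock.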


167 Comments


  • anactoraaron - Wednesday, December 15, 2010 - link

    I would like to thank Ryan for the article that makes me forget the "OC card in the review" debacle. Fantastic in depth review with no real slant to team green or red. Critics go elsewhere please.
  • Hrel - Wednesday, December 15, 2010 - link

    When are you guys gonna put all these cards in bench? Some of them have been out for a relatively long time now and they're still not in bench. Please put them in there.
  • ajlueke - Wednesday, December 15, 2010 - link

    I agree with most of the conclusions I have read here. If you already own a 5800 series card, there isn't really enough here to warrant an upgrade. Some improved features and slightly improved FPS in games doesn't quite give the same upgrade incentive as the 5870 did compared to a 4870.
    There are some cool things with the 6900 and 6800 series. Looking at the performance in games, the 6970 and even the 6870 seemed to get much closer to 2X performance when placed in crossfire as compared to 5800 series cards. That is a pretty interesting development. All in all, a good upgrade if you didn't buy a card last generation. If you did, it seems the wait is on for the 28 nm version of the GPU.
  • Belard - Wednesday, December 15, 2010 - link

    NO!

    The 800 cards were the HIGH end models since the 3000 series and worked well through to the 5000 series with the 5970 being the "odd one" since the "X2" made more sense like the 4850X2.

    It also allows for a "x900" series if needed.

    AMD needs to NOT COPY Nvidia's naming games... did they hire someone from Nvidia? Even the GeForce 580/570 still belong to the 400 series since it's the same tech. Should have been named 490 and 475... But hey, in 12 months, Nvidia will be up to the 700 series. Hey, Google Chrome is version 8.0 and it's been on the market for about 2 years! WTF?!

    What was their excuse again? Oh, to not create confusion with the 5700 series? So they frack up the whole model names for a mid-range card? The 6800's should have been 6700s, simple as that. Yes, there will be some people who will accidentally downgrade.

    What the new 6000 series has going for AMD is that they are somewhat cheaper and easily cost less to make than the 5000s and what Nvidia makes.

    In the end, the 6000 series is the first dumb thing AMD has done since the 2000 series, but nowhere near as bad.
  • MS - Wednesday, December 15, 2010 - link

    In terms of effienct usage of space though AMD is doing quite well; ... should be efficient

    Nice article so far,

    Regards,
    Michael
  • nitrousoxide - Wednesday, December 15, 2010 - link

    The power connector on the left (8-pin of 6970 and 6-pin of 6950) has a corner (bottom left corner) cut down, that's because the cooler doesn't fit with the PCB design, if you install it with force the power connector would get stuck. So the delay of 6900 Series could be due to this issue, AMD needs one month to 'manually polish' all power connectors of the stock-cards in order to go with the cooler. Well, just a joke, but this surely reflects how poorly AMD organizes the whole design and manufacture process :)
  • nitrousoxide - Wednesday, December 15, 2010 - link

    you can find this out here :)
    hiphotos.baidu.com/coreavc/pic/item/70f48d81ffe07cf26d811957.jpg
  • nitrousoxide - Wednesday, December 15, 2010 - link

    AMD promises that every one will get a unique 6970 or 6950, different from any other card on the planet :)
  • GummiRaccoon - Wednesday, December 15, 2010 - link

    The performance of these cards is much better with 10.12, why didn't you test it with that?
  • Ryan Smith - Wednesday, December 15, 2010 - link

    10.12 does not support the 6900 series.

    The 8.79.6.2RC2 drivers, dated December 7th, were the absolute latest available for the 6900 series at the time of publication.
