Redefining TDP With PowerTune

One of our fundamental benchmarks is FurMark, oZone3D’s handy GPU load testing tool. The furry donut can generate a workload in excess of anything any game or GPGPU application can do, giving us an excellent way to establish a worst case scenario for power usage, GPU temperatures, and cooler noise. The fact that it was worse than any game/application has ruffled both AMD and NVIDIA’s feathers however, as it’s been known to kill older cards and otherwise make their lives more difficult, leading to the two companies labeling the program a “power virus”.

FurMark is just one symptom of a larger issue however, and that’s TDP. Compared to their CPU counterparts, which top out around 140W, video cards are power monsters. The PCI Express specification allows for cards to draw up to 300W, and with FurMark in use cards can quite regularly surpass that. Things get even dicier on laptops and all-in-one computers, where compact spaces and small batteries limit how much power a GPU can draw and how much heat can effectively be dissipated. For these reasons products need to be designed to meet a certain TDP; in the case of desktop cards we saw products such as the Radeon HD 5970, which shipped with sub-5870 clocks to meet the 300W TDP (with easy overvolting controls to make up for it), and in laptop parts we routinely see products with many disabled functional units and low clocks to meet particularly low TDP requirements.

Although we see both AMD and NVIDIA surpass their official TDPs under FurMark, it’s never by very much. After all, TDP defines the thermal limits of a system, so regularly surpassing those limits can overwhelm the cooling and ultimately risk system damage. It’s because of FurMark and similar scenarios that AMD says it has to set its products’ performance lower than it would like. Call of Duty, Crysis, The Sims 3, and other games aren’t necessarily pushing video cards past their TDPs, but the need to cover edge cases like FurMark is what forces the issue. As a result AMD has to plan around the applications and games that cause the highest power draw, setting performance levels low enough that these edge cases don’t lead to the GPU regularly surpassing its TDP.

This ultimately leads to a concept similar to dynamic range, defined by Wikipedia as “the ratio between the largest and smallest possible values of a changeable quantity.” We typically use dynamic range when talking about audio and video, referring to the range between quiet and loud sounds and between dark and light imagery, respectively. Power draw is quite similar in concept, with a variety of games and applications producing a variety of loads on the GPU. Furthermore, while dynamic range is generally a good thing for audio and video, it’s generally a bad thing for desktop GPU usage: low power utilization in a GPU-bound game means there’s plenty of headroom for bumping up clocks and voltages to improve that game’s performance. Going back to our earlier example, however, a GPU can’t be set this high under normal conditions, otherwise FurMark and similar applications will push the GPU well past its TDP.

The answer to the dynamic power range problem is variable clockspeeds: set the clocks low to keep power usage down in power-demanding games, and set the clocks high in power-light games. In fact we already have this in the CPU world, where Intel and AMD use their turbo modes to achieve it. If there’s enough thermal and power headroom, these processors can increase their clockspeeds by several steps. This not only lets AMD and Intel offer processors that are faster on average, but it lets them specifically improve single-threaded performance by pushing one core well above its normal clockspeed when it’s the only core in use.
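The turbo behavior described above can be sketched as a simple control loop. This is a deliberately simplified illustration, not any vendor's actual algorithm; all of the numbers, limits, and the per-step power cost are assumptions made up for the example.

```python
def turbo_clock(base_mhz, step_mhz, max_steps, power_w, temp_c,
                power_limit_w, temp_limit_c, step_cost_w=5.0):
    """Return the highest clock reachable without exceeding limits.

    All parameters are hypothetical; step_cost_w models the extra
    power each clock step is assumed to add to the package.
    """
    clock = power_w and base_mhz or base_mhz  # start at the base clock
    clock = base_mhz
    for _ in range(max_steps):
        projected = power_w + step_cost_w
        # Boost only while both power and thermal headroom remain.
        if projected > power_limit_w or temp_c >= temp_limit_c:
            break
        clock += step_mhz
        power_w = projected
    return clock

# A lightly loaded, cool chip boosts all four steps; a hot chip
# stays at its base clock because the thermal check fails first.
print(turbo_clock(3000, 100, 4, power_w=80, temp_c=60,
                  power_limit_w=125, temp_limit_c=90))  # 3400
print(turbo_clock(3000, 100, 4, power_w=80, temp_c=95,
                  power_limit_w=125, temp_limit_c=90))  # 3000
```

The key point the sketch captures is that turbo starts from a conservative base clock and opportunistically adds steps when headroom exists.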

It was only a matter of time until this kind of scheme came to the GPU world, and that time is here. Earlier this year we saw NVIDIA lay the groundwork with the GTX 500 series, implementing external power monitoring hardware to identify and slow down FurMark and OCCT; however that’s as far as they went, throttling only those two applications. With Cayman and the 6900 series, AMD is taking this to the next step with a technology called PowerTune.

PowerTune is a power containment technology, designed to allow AMD to contain the power consumption of their GPUs to a pre-determined value. In essence it’s Turbo in reverse: instead of having a low base clockspeed and higher turbo multipliers, AMD is setting a high base clockspeed and letting PowerTune cap GPU performance when it exceeds AMD’s TDP. The net result is that AMD can reduce the dynamic power range of their GPUs by setting high clockspeeds at high voltages to maximize performance, and then letting PowerTune cap GPU performance for the edge cases that cause GPU power consumption to exceed AMD’s preset value.
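The "turbo in reverse" idea can be sketched the same way: start at a high base clock and step the clock down only when estimated power exceeds the preset cap. Again, this is an illustrative sketch under stated assumptions (a linear power-vs-clock model and made-up numbers), not AMD's actual PowerTune implementation.

```python
def powertune_clock(base_mhz, estimated_power_w, cap_w, step_mhz=10):
    """Lower the clock in small steps until power fits under the cap.

    Assumes power scales linearly with clock, which is a deliberate
    simplification for illustration; real power scaling is steeper.
    """
    clock = base_mhz
    power = estimated_power_w
    while power > cap_w and clock > 0:
        clock -= step_mhz
        # Re-estimate power at the reduced clock.
        power = estimated_power_w * clock / base_mhz
    return clock

# A typical game under the cap keeps the full base clock, while a
# FurMark-like worst case is throttled just enough to fit a
# hypothetical 250 W budget.
print(powertune_clock(880, estimated_power_w=200, cap_w=250))  # 880
print(powertune_clock(880, estimated_power_w=300, cap_w=250))  # 730
```

Note how this inverts the turbo sketch: the common case runs at full speed, and only the power-hungry edge cases pay a clockspeed penalty.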


167 Comments


  • henrikfm - Wednesday, December 15, 2010 - link

    The right numbers for these cards considering the performance:

    6970 -> 5875
    6950 -> 5855
  • flyck - Wednesday, December 15, 2010 - link

    Anand also tested with 'outdated' drivers. It is of course AMD's fault for not supplying the best drivers available at launch. But Anand used 10.10; reviews that use 10.11, like HardOCP's, show the 6950 performing equal to or better than the GTX 570!! And the 6970 trades blows with the GTX 580 but is overall a little slower (though faster than the GTX 570).

    And now we have to wait for the 10.12 drivers, which were meant for the 69xx series.
  • flyck - Wednesday, December 15, 2010 - link

    My bad, Anand tested with 10.11. :shame:
    10.12 doesn't seem to improve performance.

    That said, Anand, would it be possible to change your graphs?
    Start with the low quality settings and end with the high quality ones, and make the high quality chart for single cards only. Right now it just isn't readable with SLI and CrossFire numbers mixed in.

    According to your results the 6970 is > 570 and 6950 ~ 570, but only with everything turned on... and one can't deduce that from the current presentation.
  • Will Robinson - Wednesday, December 15, 2010 - link

    $740 for HD6970 CrossfireX dominates GTX580 SLI costing over $1000.
    That's some serious ownage right there.
    Good pricing on these new cards and solid numbers for power/heat and noise.
    Seems like a good new series of cards from AMD.
  • prdola0 - Wednesday, December 15, 2010 - link

    No, you're wrong. Re-read the graphs. GTX580 SLI wins most of the time.
  • softdrinkviking - Wednesday, December 15, 2010 - link

    By a small average amount, and for ~$250 extra.
    Once you get to that level, you're not really hurting for performance anyway, so for people who really just want to play games and aren't interested in having the "fastest card" just to have it, the 6970 is the best value.
  • Nfarce - Wednesday, December 15, 2010 - link

    True. However, AMD has just about always been about value rather than an all-out horsepower war with NVIDIA. Some people are willing to spend for bragging rights.

    But I'm a little suspicious of AT's figures for these cards. Two other tech sites (Tom's Hardware and Guru3D) show the GTX 570 and 580 solidly beating the 6950 and 6970 respectively in the same games with similar PC builds.
  • IceDread - Friday, December 17, 2010 - link

    You are wrong. HD 5970 in CrossFire wins over GTX 580 SLI. But AnandTech did not test that.
  • ypsylon - Wednesday, December 15, 2010 - link

    A lot of people were anxious to see what AMD would bring to market with the 6950/6970. And once again, not much. Minor advantages (like 5 FPS in a handful of games) are nothing worth writing or screaming about. For now the GTX 580 is more expensive, but with AMD's new cards out, NVIDIA will get serious about price. That $500 price point won't last long; I expect at least $50 off within the next 4-6 weeks.

    The GTX 580 is the best option today for someone shopping for a new card, but if you currently own a 5850/5870/5970 (CF or not), don't even bother with the 69[whatever].
  • duploxxx - Wednesday, December 15, 2010 - link

    At that price point the 580 is the best buy? Get lost. The 580 is way overpriced for the small performance increase it has over the 570/6970, not to mention the additional power consumption. I don't see any reason at all to buy that card.

    Indeed there's no need to upgrade from a 58xx series card, but there's no reason to move to an NV-based card either.
