Introducing PowerTune Technology With Boost

Since the 7970GE’s hardware is identical to the 7970, let’s jump straight into AMD’s software stack.

With the 7970GE AMD is introducing their own form of GPU Boost, which they are calling PowerTune Technology With Boost (PT Boost). PT Boost is a combination of BIOS and Catalyst driver changes that allows AMD to overdrive the GPU when conditions permit, all without any hardware changes.

In practice PT Boost is very similar to NVIDIA’s GPU Boost. Both technologies are based around the concept of a base clock (or engine clock in AMD’s terminology) with a set voltage, and then one or more boost bins with an associated voltage that the GPU can move to as power/thermal conditions permit. In essence PT Boost allows the 7970GE to overvolt and overclock itself to a limited degree.
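
To make the bin concept concrete, here is a minimal sketch of how such a boost algorithm could pick its operating point. The bin table, power limit, and function names are our own invention for illustration, not AMD's actual tables:

```python
# Hypothetical clock/voltage bin table (MHz, volts). The real 7970GE
# has many finer-grained bins between 1000MHz and 1050MHz; these
# values are illustrative only.
BINS = [
    (1000, 1.15),  # base engine clock
    (1025, 1.17),
    (1050, 1.20),  # advertised boost clock
]

POWER_LIMIT_W = 250.0  # PowerTune board power budget

def pick_bin(estimated_power_at):
    """Return the highest bin whose estimated power fits the budget.

    estimated_power_at(clock_mhz, voltage) -> watts is supplied by
    the power-estimation logic (sketched further down).
    """
    for clock, voltage in reversed(BINS):
        if estimated_power_at(clock, voltage) <= POWER_LIMIT_W:
            return clock, voltage
    return BINS[0]  # fall back to the base bin in this sketch
```

As the article notes below, the real implementation has many intermediate bins and re-evaluates its choice every few milliseconds.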

With that said, there are some differences in implementation. First and foremost, AMD isn’t pushing the 7970GE nearly as far with PT Boost as NVIDIA has pushed the GTX 680 with GPU Boost. The 7970GE’s boost clock is 1050MHz, a mere 50MHz over the base clock, while the GTX 680 can boost upwards of 100MHz over its base clock. So long as both companies go down this path I expect we’ll see boost clocks move higher and become more important with successive generations, just as we’ve seen with Intel and their CPU turbo boost; for the time being, though, GPU turbo is going to be far shallower than what we’ve seen on the CPU.

At the same time, however, while AMD isn’t pushing the 7970GE as hard as the GTX 680, they are being much more straightforward in what they guarantee – or as AMD likes to put it, they’re being fully deterministic. Every 7970GE can hit 1050MHz and every 7970GE tops out at 1050MHz. This is as opposed to NVIDIA’s GPU Boost, where every card can hit at least the boost clock but there will be some variation in the top clock. No 7970GE will perform significantly better or worse than another on account of clockspeed, although chip-to-chip quality variation means we should still expect some trivial performance variation due to differences in power consumption.

On that note it was interesting to see that, because of their previous work with PowerTune, AMD has far more granularity than NVIDIA when it comes to clockspeeds. GK104’s bins are 13MHz apart; we don’t have an accurate measure for AMD cards because there are so many bins between 1000MHz and 1050MHz that we can’t accurately count them. Nor for that matter does the 7970GE stick with any one bin for very long, as again thanks to PowerTune AMD can switch their clocks and voltages in a few milliseconds, as opposed to the roughly 100ms it takes NVIDIA to do the same thing. To be frank, in a desktop environment it’s not clear whether this is going to make a practical difference (we’re talking about moving less than 2% in the blink of an eye), but if this technology ever makes it to mobile, a fast switching time would be essential to minimizing power consumption.

Such fast switching is of course a consequence of what AMD has done with their internal counters for PowerTune. As a reminder, for PowerTune AMD estimates power consumption via internal counters that monitor GPU usage and calculate power consumption from that usage, whereas NVIDIA simply monitors the power going into the GPU. The former is technically an estimation (albeit a precise one), while the latter is accurate but fairly slow, which is why AMD can switch clocks so much faster than NVIDIA can.
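
As a rough sketch of why the counter approach is so fast: the estimate boils down to a weighted sum over activity counters that can be recomputed every few milliseconds, with no waiting on slow external power circuitry. The counter names and weights here are invented for illustration:

```python
# Hypothetical per-block activity counters (fraction of cycles busy)
# and per-block weights in watts at full activity. AMD's real weights
# come from characterization of the silicon.
WEIGHTS_W = {"shader_busy": 140.0, "mem_ctrl_busy": 45.0, "rop_busy": 25.0}
IDLE_W = 40.0  # baseline board power with all blocks idle

def estimate_power(counters, voltage, nominal_voltage=1.15):
    """Estimate board power from activity counters.

    Dynamic power scales roughly with V^2, so the voltage bump that
    comes with a boost bin raises the estimate even at identical
    activity levels.
    """
    scale = (voltage / nominal_voltage) ** 2
    active_w = sum(w * counters.get(name, 0.0) for name, w in WEIGHTS_W.items())
    return IDLE_W + active_w * scale
```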

For the 7970GE AMD is further refining their PowerTune algorithms in order to account for PT Boost’s voltage changes and to further improve the accuracy of the algorithm. The big change here is that on top of their load-based algorithm AMD is adding temperature into the equation, via what they’re calling Digital Temperature Estimation (DTE). Like the existing PowerTune implementation, DTE is based on internal counters rather than an external sensor (i.e. a thermal diode). AMD uses those counters, along with their knowledge of the cooling system, to quickly estimate the GPU’s temperature in much the same way they estimate power consumption: by tracking the power going in and the heat coming out.

The end result of this is that by estimating the temperature AMD can now estimate the leakage of the chip (remember, leakage is a function of temperature), which allows them to more accurately estimate total power consumption. For previous products AMD has simply assumed the worst case scenario for leakage, which kept real power below AMD’s PowerTune limits but effectively overestimated power consumption. With DTE and the ability to calculate leakage AMD now has a better power estimate and can push their clocks just a bit higher as they can now tap into the headroom that their earlier overestimation left. This alone allows AMD to increase their PT Boost performance by 3-4%, relative to what it would be without DTE.
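
In equation form, a first-order model (our formulation for illustration, not AMD's exact math) is:

$$P_{\text{total}} \;\approx\; \underbrace{\alpha C V^{2} f}_{\text{dynamic}} \;+\; \underbrace{V \, I_{\text{leak}}(T)}_{\text{static}}, \qquad I_{\text{leak}}(T) \approx I_{0}\, e^{\,k\,(T - T_{0})}$$

Because leakage current grows roughly exponentially with temperature, assuming worst-case junction temperature substantially overstates static power whenever the chip is actually running cooler; that overstatement is precisely the headroom DTE recovers.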

AMD actually has a longer explanation on how DTE works, and rather than describing it in detail we’ll simply reprint it.

DTE works as a deterministic model of temperature in a worst case environment, as to give us a better estimate of how much current the ASIC is leaking at any point in time. As a first order approximation, ASIC power is roughly a function of: dynamic_power(voltage, frequency) + static_power(temperature, voltage, leakage).

Traditional PowerTune implementations assume that the ASIC is running at a worst case junction temperature, and as such always overestimates the power contribution of leaked current. In reality, even at a worst case ambient temp (45C inlet to the fansink), the GPU will not be working at a worst case junction temperature. By using an estimation engine to better calculate the junction temp, we can reduce this overestimation in a deterministic manner, and hence allow the PowerTune architecture to deliver more of the budget towards dynamic power (i.e. frequency) which results in higher performance. As an end result, DTE is responsible for about 3-4% performance uplift vs the HD7970 GHz Edition with DTE disabled.

The DTE mechanism itself is an iterative differential model which works in the following manner. Starting from a set of initial conditions, the DTE algorithm calculates dTemp_ti/dt based on the inferred power consumption over a previous timeslice (a function of voltage, workload/capacitance, freq, temp, leakage, etc), and the thermal capacitance of the fansink (a function of fansink and T_delta). Simply put, we estimate the heat into the chip and the heat out of the chip at any given moment. Based on this differential relation, it’s easy to work back from your initial conditions and estimate Temp_ti, which is the temperature at any timeslice. A lot of work goes into tuning the parameters around thermal capacitance and power flux, but in the end, you have an algorithmic way to always provide benefit over the previous worst-case assumption, but also to guarantee that it will be representative of the entire population of parts in the market.

We could have easily done this through diode measurements, and used real temperature instead of digital temperature estimates…. But that would not be deterministic. Our current method with DTE guarantees that two parts off the shelf will perform the same way, and we enable end users to take advantage of their extra OC headroom on their parts through tools like Overdrive.
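
To illustrate the shape of the model AMD describes (this is our own sketch with invented thermal constants and an assumed leakage model, not AMD's implementation), the iterative estimation amounts to a forward-Euler integration of heat in minus heat out each timeslice:

```python
import math

# Illustrative constants only; AMD tunes these per ASIC and fansink.
THERMAL_CAPACITANCE = 50.0  # J/K, lumped chip + heatsink
THETA_SA = 0.25             # K/W, fansink-to-ambient thermal resistance
AMBIENT_C = 45.0            # worst-case inlet temperature AMD cites
I0_LEAK, K_LEAK, T0_C = 10.0, 0.03, 85.0  # assumed leakage model

def leakage_amps(temp_c):
    # First-order exponential leakage model (assumed form).
    return I0_LEAK * math.exp(K_LEAK * (temp_c - T0_C))

def step_dte(temp_c, dynamic_power_w, voltage, dt_s=0.005):
    """Advance the estimated junction temperature by one timeslice.

    Heat in = dynamic power + static (leakage) power at the current
    temperature estimate; heat out = conduction to ambient through
    the fansink. dTemp/dt = (P_in - P_out) / C_thermal.
    """
    static_w = voltage * leakage_amps(temp_c)
    p_in = dynamic_power_w + static_w
    p_out = (temp_c - AMBIENT_C) / THETA_SA
    temp_c += (p_in - p_out) / THERMAL_CAPACITANCE * dt_s
    return temp_c, static_w  # static_w feeds back into the power budget
```

Because the static power term falls as the temperature estimate drops below the old worst-case assumption, the budget it frees up can be handed to dynamic power, i.e. higher boost bins, which is where the 3-4% uplift comes from.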

By tapping into this headroom, however, AMD has also increased their real power consumption at lower temperatures and leakages, which is why, despite the identical PowerTune limits, the 7970GE will consume more power than the 7970. We’ll get into the numbers in our benchmarking section, but it’s something to keep in mind for the time being.

Finally, on the end-user monitoring front we have some good news and some bad news. The bad news is that for the time being it’s not going to be possible to accurately monitor the real clockspeed of the 7970GE, either through AMD’s control panel or through 3rd party utilities such as GPU-Z. As it stands AMD is only exposing the base P-states but not the intermediate P-states, a limitation that goes back to the launch of the 7970 and is why we have never been able to tell if PowerTune throttling is active (unlike on the 6900 series). So for the time being we have no idea what the actual (or even average) clockspeed of the 7970GE is. All we can see is whether it’s at its boost P-state – displayed as a 1050MHz core clock – or whether it’s dropped to its base P-state, at which point the 7970GE will report 1000MHz.

The good news is that internally, of course, AMD can take much finer readings (something they made sure to show us at AFDS), and they’ll finally be exposing these finer readings to 3rd party applications. Unfortunately they haven’t given us an expected date, but soon enough their API will be able to report the real clockspeed of the GPU, allowing users to see the full effects of both PT Boost and PT Throttle. It’s a long overdue change and we’re glad to see AMD finally exposing this data.

Comments

  • Ammaross - Friday, June 22, 2012 - link

    So, since the 7970 GE is essentially a tweaked OCed 7970, why not include a factory-overclocked nVidia 680 for fairness? There's a whole lot of headroom on those 680s as well that these benches leave untouched and unrepresented.
  • elitistlinuxuser - Friday, June 22, 2012 - link

    Can it run Pong, and at what frame rates?
  • Rumpelstiltstein - Friday, June 22, 2012 - link

    Why is Nvidia red and AMD green?
  • Galcobar - Friday, June 22, 2012 - link

    Standard graph colouring on Anandtech is that the current product is highlighted in green, specific comparison products in red. The graphs on page 3 for driver updates aren't a standard graph for video card reviews.

    Also, typo noted on page 18 (OC Gaming Performance), the paragraph under the Portal 2 1920 chart: "With Portal 2 being one of the 7970GE’s biggest defEcits" -- deficits
  • mikezachlowe2004 - Sunday, June 24, 2012 - link

    Compute performance is a big factor in deciding a purchase as well, and I am disappointed to see no mention of it in the conclusion. AMD blows nVidia out of the water when it comes to compute performance, and this should not be taken lightly, seeing as games are implementing more and more compute capabilities, among many other things. Compute performance has been growing at a rate higher than ever, and it is very disappointing to see no mention of this in Anand's conclusion.

    I use AutoCAD for work all the time, but I also enjoy playing games, and with a workload like this AMD's GPUs provide a huge advantage over nVidia simply because nVidia's GK104 compute performance is nowhere near that of AMD's. AMD is the obvious choice for someone like me.

    As far as the noise and temps go, I personally feel that if you're spending $500 on a GPU and obviously thousands on your system, there is no reason not to spend a couple hundred on water cooling. Water cooling completely eliminates any concern about temps and noise, which should make AMD's card the clear choice. Same goes for power consumption. If you're spending thousands on a system, there is no reason you should be worried about a couple extra dollars a month on your bill. This is just how I see it. Now don't get me wrong, nVidia has a great card for gaming, but gaming only. AMD offers the best of both worlds, both gaming and compute, and that makes the 7000 series the clear winner to me.
  • CeriseCogburn - Sunday, June 24, 2012 - link

    It might help if you had a clue concerning what you're talking about.

    " CAD Autodesk with plug-ins are exclusive on Cuda cores Nvidia cards. Going crossfire 7970 will not change that from 5850. Better off go for GTX580."

    " The RADEON HD 7000 series will work with Autodesk Autocad and Revitt applications. However, we recommend using the Firepro card as it has full support for the applications you are using as it has the certified drivers. For the list of compatible certified video cards, please visit http://support.amd.com/us/gpudownload/fire/certifi... "

    nVidia works out of the box, amd does NOT - you must spend thousands on Firepro.

    Welcome to reality, the real one that amd fanboys never travel in.
  • spdrcrtob - Tuesday, July 17, 2012 - link

    It might help if you knew what you were talking about...

    CAD, as I infer, is AutoCAD by Autodesk, and it doesn't have any CUDA-dedicated plugins. You are thinking of 3DS Max's method of rendering called iRay. That's fairly new, from a 2011 release.

    There isn't anything else that uses CUDA processors on a dedicated scale unless it's a 3rd party program or plugin. But not in AutoCAD; AutoCAD barely needs anything. So get it straight.

    R-E-V-I-T (with one T) requires more, as there's a rendering engine built in, not to mention it's mostly worked in as a 3D application, unlike AutoCAD, which is mostly used in 2D.

    Going Crossfire won't help because most mid-range and high-end single GPUs (AMD & NVIDIA) will be fine for ANY surface modeling and/or 3D rendering. If you use the application right you can increase performance numbers instead of increasing card count.

    All Autodesk products work with any GPU really; there are supported or "certified" drivers and cards, usually only "CAD" cards like FirePros or Quadros.

    NVIDIA's and AMD's work right out of the box; it just depends on the add-in board partner and build quality, NVIDIA fanboy. If you're going to state facts, then get your facts straight where it matters, not with your self-thought cute remarks.

    Do more research, or don't state something you know nothing about. I have supported CAD and engineering environments and the applications they use for 8 years now, with 5 more years of IT support experience before that.
  • aranilah - Monday, June 25, 2012 - link

    Please put up a graph of the 680 overclocked to its maximum potential versus this card at its maximum OC; that would be a different story, I believe, though I'm not sure. Please do it, because in your 680 review there is no OC testing :/
  • MrSpadge - Monday, June 25, 2012 - link

    - AMD's boost assumes the stock heatsink - how is this affected by custom / 3rd party heat sinks? Will the chip think it's melting, whereas in reality it's cruising along just fine?

    - A simple fix would be to read out the actual temperature diode(s) already present within the chip. Sure, not deterministic.. but AMD could let users switch to this mode for better accuracy.

    - AMD could implement a calibration routine in the control panel to adjust the digital temperature estimation to the actual heat sink present -> this might avoid the problem altogether.

    - Overvolting just to reach 1.05 GHz? I don't think this is necessary. Actually, I think AMD has been generously overvolting most CPUs and some GPUs in recent years. Some calibration for the actual chip's capability would be nice as well - i.e. test whether MY GPU really needs more voltage to reach the boost clock.

    - 4-digit product numbers and only fully using 2 of them, plus the 3rd one to a limited extent (only 2 states to distinguish - 5 and 7). This is ridiculous! The numbers are there to indicate performance!!!

    - Bring out cheaper 1.5 GB versions for us number crunchers.

    - Bring out an HD7960 with approx. the same number of shaders as the HD7950, but ~1 GHz clock speeds. Most chips should easily do this.. and AMD could sell the same chip for more, since it would be faster.
  • Hrel - Monday, June 25, 2012 - link

    How can you write a review like this, specifically to test one card against another, and then only overclock one of them in the "OC Gaming Performance" section? Push the GTX 680 as far as you can too; otherwise those results are completely meaningless for comparison.
