Lower Idle Power & Better Overcurrent Protection

One aspect AMD was specifically looking to improve in Cypress over RV770 was idle power usage. Load power for RV770 was fine at 160W on the HD 4870, but that usage wasn't dropping by nearly as much at idle – it fell by less than half, to 90W. Later BIOS revisions managed to knock a few more watts off of this, but it wasn't a significant change, and even later designs like RV790 were still limited in how far they could idle down, bottoming out at 60W.

As a consequence, AMD designed Cypress with a much, much lower target in mind. Their goal was to get idle power down to 30W, 1/3rd that of RV770. What they got was even better: they beat that target by 10%, hitting a final idle power of 27W. As a result Cypress can idle at 30% of the power of RV770, or, measured against Cypress's own load power of 188W, just 14% of its load power.
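
The math behind those percentages is simple enough to verify; here is a trivial sanity check using only the figures quoted above:

```python
# Back-of-the-envelope check of the idle power figures quoted above.
rv770_idle_w = 90.0      # HD 4870 idle power
cypress_idle_w = 27.0    # Cypress (HD 5870) idle power
cypress_load_w = 188.0   # Cypress load power

print(f"Idle vs. RV770 idle: {cypress_idle_w / rv770_idle_w:.0%}")      # 30%
print(f"Idle vs. own load:   {cypress_idle_w / cypress_load_w:.0%}")    # 14%
```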

Accomplishing this kind of dramatic reduction in idle power usage required several changes. Key among them are additional power-regulating circuitry on the board and additional die space on Cypress assigned to power regulation. Notably, all of these changes were accomplished without the use of power-gating to shut down unused portions of the chip, something that's common on CPUs. Instead everything has been achieved through more exhaustive clock-gating and clock scaling (that is, stopping the clocks to idle blocks and reducing clock speeds elsewhere), something GPUs have been doing for some time now.

The effect of this clock scaling is quickly evident in the idle/2D clock speeds of the 5870: 150MHz for the core and 300MHz for the memory. These idle clocks are significantly lower than the 4870's (550MHz/900MHz), which in the case of the core is the source of its power savings compared to the 4870. As tweakers who have tried manually reducing the idle clocks on RV770-based cards for further power savings have found, RV770 actually loses stability in most situations if its core clock drops too low. Cypress rectifies this, enabling it to hit these lower core speeds.
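
To put the idea in concrete terms – and this is purely an illustration, not AMD's implementation – driver-managed clock scaling boils down to picking a clock state based on load:

```python
# Hypothetical sketch of driver-side clock scaling between a 2D idle state
# and a 3D load state, in the spirit of PowerPlay. The state table uses the
# 5870's published clocks, but the selection logic and the 20% threshold
# are illustrative assumptions, not AMD's implementation.
POWER_STATES = {
    "idle_2d": {"core_mhz": 150, "mem_mhz": 300},   # 5870 idle clocks
    "load_3d": {"core_mhz": 850, "mem_mhz": 1200},  # 5870 load clocks
}

def select_state(gpu_utilization: float) -> dict:
    """Pick a clock state from a simple utilization threshold (assumed)."""
    key = "load_3d" if gpu_utilization > 0.20 else "idle_2d"
    return POWER_STATES[key]
```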

Even bigger, however, are the enhancements to Cypress's memory controller, which allow it to use a number of power-saving tricks with GDDR5 RAM, along with other features that we'll get to in a bit. RV770's memory controller could take advantage of few of GDDR5's advanced features beyond its higher bandwidth. Lacking this full bag of tricks, RV770 and its derivatives were unable to reduce their memory clock speed, which is why the 4870 and other products ran such high memory clocks even at idle. This in turn limited the power savings that could be had from idling GDDR5 modules.

With Cypress AMD has implemented nearly the entire suite of GDDR5's power-saving features, allowing them to reduce the power usage of both the memory controller and the GDDR5 modules themselves. As with the improvements to the core clock, key among the memory improvements is the ability to drop to much lower memory clock speeds, using fast GDDR5 link re-training to quickly switch the memory clock speed and voltage without inducing glitches. AMD is also now using GDDR5's low-power strobe mode, which in turn allows the memory controller to save power by turning off its clock data recovery mechanism. When we discussed the matter with AMD, they compared these changes to putting the memory modules and memory controller into a GDDR3-like mode, which is a fair description of how GDDR5 behaves when its high-speed features are not enabled.
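
As a rough mental model of one of these transitions – the steps and interfaces below are our own illustration, not a real driver API; the actual sequence lives in the memory controller hardware – the re-training dance looks something like this:

```python
# Illustrative ordering of the fast GDDR5 link re-training described above:
# quiesce traffic, change clock and voltage, re-train the link, and resume.
# The mc object and all of its methods are hypothetical stand-ins.
def retrain_memory_link(mc, target_mhz: int, target_mv: int) -> None:
    mc.pause_traffic()                # quiesce outstanding memory requests
    mc.set_memory_clock(target_mhz)   # e.g. 300 at idle, 1200 under load
    mc.set_memory_voltage(target_mv)
    mc.train_link()                   # fast re-training keeps the switch glitch-free
    if target_mhz <= 300:
        mc.enable_low_power_strobe()  # lets the controller turn off clock data recovery
    mc.resume_traffic()
```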

Finally, AMD was able to find yet more power savings for CrossFire configurations, and as a result the slave card(s) in a CrossFire configuration can use even less power. The figure given to us for an idling slave card is 20W, a product of the fact that slave cards go completely unused when the system is idling. In this state slave cards are still capable of instantaneously ramping up for full-load use, although conceivably AMD could go lower still by powering down the slave cards entirely, at the cost of that ability.

On the opposite side of achieving such low idle power usage is the need to manage load power usage, which was also overhauled for Cypress. As a reminder, TDP is not an absolute maximum; rather, it's a maximum based on what's believed to be the highest reasonable load the card will ever experience. As a result it's possible in extreme circumstances for the card to need more power than its TDP rating, which is a problem.

That problem reared its head a lot for RV770 in particular with the rise in popularity of stress-testing programs like FurMark and OCCT. Although stress testers on the CPU side are nothing new, FurMark and OCCT heralded a new generation of GPU stress testers that were extremely effective at generating a maximum load. Unfortunately for RV770, its maximum possible load and its TDP are pretty far apart, which becomes a problem since the VRMs used on a card only need to be spec'd to meet the card's TDP plus some safety margin. They don't need to handle the card's true maximum load, as that load should never occur.

Why is this? AMD believes that the instruction streams generated by OCCT and FurMark are entirely unrealistic. They try to hit everything at once, which is something AMD doesn't believe a game or even a GPGPU application would ever do. For this reason these programs are held in low regard by AMD, and in our discussions with them they referred to the programs as “power viruses”, a term normally associated with malware. We don't agree with the terminology, but in our testing we can't disagree with AMD about the realism of their loads – we can't find anything else that generates the same kind of load as OCCT and FurMark.

Regardless of what AMD wants to call these stress testers, there was a real problem when they were run on RV770. The overcurrent situation they created was too much for the VRMs on many cards, and as a failsafe these cards would shut down to protect the VRMs. At a user level, shutting down like this isn't a very helpful failsafe mode. At a hardware level, shutting down like this isn't enough to protect the VRMs in all situations. Ultimately these programs were capable of permanently damaging RV770 cards, and AMD needed to do something about it. For RV770 they could use the drivers to throttle these programs: until Catalyst 9.8 the drivers detected the programs by name, and since 9.8 they detect the ratio of texture to ALU instructions (Ed: We're told NVIDIA throttles similarly, but we don't have a good control for testing this statement). This keeps RV770 safe, but it wasn't good enough. It's a hardware problem, so the solution needs to be in hardware – particularly in case anyone ever writes a genuine power virus the drivers can't stop, in an attempt to break cards on a wide scale.
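
For the curious, a heuristic of this general shape might look like the following; AMD hasn't published its actual thresholds, so the numbers here are placeholders:

```python
# Minimal sketch of the ratio-based detection described above: since
# Catalyst 9.8 the driver reportedly watches the ratio of texture to ALU
# instructions rather than matching executable names. The band below is an
# assumed placeholder; AMD's actual thresholds are not public.
def looks_like_stress_test(tex_instructions: int, alu_instructions: int,
                           lo: float = 0.05, hi: float = 0.50) -> bool:
    """Flag shaders whose texture:ALU mix falls outside the range seen in games."""
    if alu_instructions == 0:
        return True  # a pure-texture stream is nothing a real game would submit
    ratio = tex_instructions / alu_instructions
    return not (lo <= ratio <= hi)
```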

This brings us to Cypress. For Cypress, AMD has implemented a hardware solution to the VRM problem by dedicating a very small portion of Cypress's die to an on-die monitor, whose job is to continuously watch the VRMs for dangerous conditions. Should the VRMs end up in a critical state, the monitor will immediately throttle back the card by one PowerPlay level. The card will continue operating at this level until the VRMs are back to safe levels, at which point the monitor will allow the card to return to the requested performance level. With a stressful program this can continue to bounce back and forth as the VRMs permit.
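
In pseudocode form, the monitor's duty cycle is simple. The interfaces below are stand-ins of our own; on Cypress this logic runs in fixed-function hardware, not software:

```python
# Sketch of the monitor's behavior as described above: while the VRMs report
# a critical state, step down one PowerPlay level; once they recover, step
# back up toward the requested level. The vrm and powerplay objects are
# hypothetical, and real hardware reacts far faster than a polling loop.
import time

def monitor_vrms(vrm, powerplay, requested_level: int) -> None:
    level = requested_level
    while True:
        if vrm.is_critical() and level > 0:
            level -= 1      # throttle back one PowerPlay level
        elif not vrm.is_critical() and level < requested_level:
            level += 1      # VRMs are safe again: restore performance
        powerplay.set_level(level)
        time.sleep(0.001)   # placeholder polling interval
```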

By implementing this at the hardware level, Cypress cards are fully protected against all possible overcurrent situations, so it's not possible for any program (OCCT, FurMark, or otherwise) to damage the hardware by generating too high a load. This also means that the driver-level protections are no longer needed, and we've confirmed with AMD that the 5870 is allowed to run to the point where it maxes out or overcurrent protection kicks in.

On that note, because card manufacturers can use different VRMs, it's very likely that we're going to see some separation in FurMark and OCCT performance based on the quality of the VRMs. The cheapest cards with the cheapest VRMs will need to throttle the most, while luxury cards with better VRMs may need to throttle little, if at all. This should make little difference in stock performance in real games and applications (since, as we covered earlier, we can't find anything that pushes a card to excess), but it will likely make itself apparent in overclocking. Overclocked cards – particularly those with voltage modifications – may hit throttling in normal applications, which means the VRMs will make a difference here. It also means that overclockers need to keep an eye on clock speeds, as the card shutting down is no longer a tell-tale sign that you're pushing it too hard.
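
For overclockers who want to catch throttling in the act, the principle is straightforward: sample the core clock under load and look for dips. A bare-bones sketch, with a stand-in clock-reading function rather than any real API:

```python
# With shutdowns replaced by throttling, a simple way to catch it is to
# sample the core clock under load and count dips below the set speed.
# read_core_mhz is a hypothetical stand-in for whatever utility exposes
# the clock; it is not a real API.
import time

def watch_for_throttling(read_core_mhz, expected_mhz: int, samples: int = 60) -> None:
    dips = 0
    for _ in range(samples):
        if read_core_mhz() < expected_mhz:
            dips += 1           # clock fell below the requested speed
        time.sleep(1.0)         # one sample per second
    print(f"Core clock dipped below {expected_mhz}MHz in {dips}/{samples} samples")
```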

Finally, while we’re discussing the monitor, we may as well talk about the rest of its features. Along with monitoring the GPU, it also serves as a PWM fan controller. This means the PWM controller is no longer a separate part that card builders add themselves, and as such we won’t be seeing any cards use a 2-pin fixed-speed fan to save money on the PWM controller. All Cypress cards (and presumably all derivatives) will have the built-in ability to drive a 4-pin fan.
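
As a rough idea of what a temperature-driven 4-pin fan profile looks like – our own illustration; AMD's actual fan curve isn't published here – the controller's job amounts to mapping temperature to a PWM duty cycle:

```python
# Illustrative 4-pin PWM fan curve of the sort the integrated controller
# makes possible: duty cycle rises linearly between a floor and a ceiling
# temperature. The specific temperatures and duty cycles are assumptions,
# not AMD's actual fan profile.
def fan_duty_cycle(gpu_temp_c: float,
                   t_min: float = 40.0, t_max: float = 90.0,
                   duty_min: float = 0.25, duty_max: float = 1.0) -> float:
    """Map GPU temperature (deg C) to a PWM duty cycle between 0.0 and 1.0."""
    if gpu_temp_c <= t_min:
        return duty_min
    if gpu_temp_c >= t_max:
        return duty_max
    span = (gpu_temp_c - t_min) / (t_max - t_min)
    return duty_min + span * (duty_max - duty_min)
```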

Comments

  • SiliconDoc - Wednesday, September 30, 2009 - link

    No, it's the fact that you tell LIES, and always in ATI's favor, and you got caught, over and over again.
    That is WHAT HAS HAPPENED.
    Now you catch hold of your senses for a moment, and supposedly all the crap you spewed is "ok".
  • SiliconDoc - Friday, September 25, 2009 - link

    Once again, all that matters to YOU, is YOUR games for PC, and ONLY top sellers, and only YOUR OPINION on PhysX.
    However, after you claimed only 2 games, you went on to bloviate about Havok.
    Now you've entirely avoided that issue. Am I to assume, as you have apparently WISHED and thrown about, that HAVOK does not function on NVidia cards? NO, QUITE THE CONTRARY!
    --
    What is REAL, is that NVidia runs Havok AND PhysX just fine, and not only that but ATI DOES NOT.
    Now, instead of supporting BOTH, you have singled out your object of HATRED, and spewed your infantile rants, your put downs, your empty comparisons (mere statements), then DEMAND that I show PhysX is worthwhile, with "golden sellers". LOL
    It has been 1.5 years or so since the Ageia acquisition, and of course, game developers turning anything out in just 6 short months are considered miracle workers.
    The real problem of course for you is ATI does not support PhysX, and when a rogue coder made it happen, NVidia supported him, while ATI came in and crushed the poor fella.
    So much for "competition", once again.
    Now, I'd demand you show where HAVOK is worthwhile, EXCEPT I'm not the type of person that slams and reams and screams against "a perceived enemy company" just because "my favorite" isn't capable, and in that sense, my favorite IS CAPABLE.
    Now, PhysX is awesome, it's great, it's the best there is, and that may or may not change, but as for now, NO OTHER demonstrations (you tube and otherwise) can match it.
    That's just a sad fact for you, and with so many maintaining your biased and arrogant demand for anything else, we may have another case of VHS instead of BETA, which of course, you would heartily celebrate, no matter how long it takes to get there.
    LOL
    Yes, it is funny. It's just hilarious. A few months ago before Mirror's Edge and Anand falling in love with PhysX in it, admittedly, in the article he posted, we had the big screamers whining ZERO.
    Well, now a few months later you are whining TWO.
    Get ready to whine higher. Yes, you have read about the uptick in support? LOL
    You people are really something.
    Oh, I know, CUDA is a big fat zero according to you, too.
    (please pass along your thoughts to higher education universities here in the USA, and the federal government national lab research facilities. Thanks)
  • SiliconDoc - Thursday, September 24, 2009 - link

    Yes, another excuse monger. So you basically admit the text is biased, and claim all readers should see the charts and go by those. LOL
    So when the text is biased, as you admit, how is it that the rest, the very core of the review is not ? You won't explain that either.
    Furthermore, the assumption that competition leads to better videocard technology more quickly fails the basic test that, in terms of technology, there is a limit to how fast things proceed, since scientific breakthroughs must come and often don't come – for instance, new energy technologies, still struggling after decades and endless billions spent to make a breakthrough, with not much to show for it.
    Same here with videocards, there is a LIMIT to the advancement speed, and competition won't be able to exceed that limit.
    Furthermore, I NEVER said prices won't be driven down by competition, and you falsely asserted that notion to me.
    I DID however say, ATI ALSO IS KNOWN FOR OVERPRICING. (or rather unknown by the red fans, huh, even said by omission to have NOT COMMITTED that "huge sin", that you all blame only Nvidia for doing.)
    So you're just WRONG once again.
    Begging the other guy to "not argue" then mischaracterizing a conclusion from just one of my statements, ignoring the points made that prove your buddy wrong period, and getting the body of your idea concerning COMPETITION incorrect due to technological and scientific constraints you fail to include, is no way to "argue" at all.
    I sure wish there was someone who could take on my points, but so far none of you can. Every time you try, more errors in your thinking are easily exposed.
    A MONOPOLY, in let's take for instance, the old OIL BARONS, was not stagnant, and included major advances in search and extraction, as Standard Oil history clearly testifies to.
    Once again, the "pat" cleche' is good for some things ( for instance competing drug stores, for example ), or other such things that don't involve inaccesible technology that has not been INVENTED yet.
    The fact that your simpleton attitude failed to note such anomalies is clear evidence that once again, thinking "is not required" for people like you.
    Once again, the rebuttal has failed.
  • kondor999 - Thursday, September 24, 2009 - link

    This is just sad, and I'm no fanboy. I really wanted a 5870, but only with 100% more speed than a GTX285 - not a lousy 33%. Definitely not worth me upgrading, so I guess ATI saved me some money. I'm certain that my 3 GTX280's in Tri-SLI will destroy 2 5870's in CF - although with slightly less compatibility (an important advantage for ATI, but not nearly enough).
  • Moricon - Thursday, September 24, 2009 - link

    I have been a regular at Tomshardware for a while now, and keep coming back to Anandtech time and again to read reviews I have already read on other sites, and this one is by far the best I have read so far (guru3d, toms, firing squad, and many others).

    The 5870 looks awesome, but from an upgrade point of view, I guess my system will not really benefit from moving on from an E7200 @ 3.8GHz, 4GB 1066, and an HD4870 @ 850MHz/4400MHz on 1680x1050.

    Such a shame that I don't have a larger monitor at the moment or I would have jumped immediately.

    Looks like the path is a Q9550 and a 5870 and a 1920x1200 monitor or larger to make sense; then you might as well go i7, i5... where do you stop?

    Well done ATI, well done! But if history follows, the Nvidia 3xx chip will be mind-blowing in comparison!
  • djc208 - Thursday, September 24, 2009 - link

    I was most surprised at how far behind the now 2-generation-old 3870 is (at least in these high-end games). Guess my next upgrade (after an SSD) should be a 5850 once the frenzy dies away.
  • JonnyDough - Thursday, September 24, 2009 - link

    They could probably use a 1.5 GB card. :(
  • mapesdhs - Wednesday, September 23, 2009 - link

    Ryan, any chance you could run Viewperf or other pro-app benchmarks please? Some professionals use consumer hardware as a cheap way of obtaining reasonable performance in apps like Maya, 3DS Max, ProE, etc., so it would be most interesting to know how the 5870 behaves when running such tests and how it compares to Quadro and FireGL cards. Pro-series boards normally have better performance for ops such as antialiased lines via different drivers and/or different internal firmware optimisations. Someday I figure perhaps a consumer card will be able to match a pro card purely by accident.

    Ian.

  • AmdInside - Wednesday, September 23, 2009 - link

    Sorry if this has already been asked, but does the 5870 support audio over DisplayPort? I am holding out for a card that does. I know it does it over HDMI, but I also want it over DisplayPort.
  • VooDooAddict - Wednesday, September 23, 2009 - link

    Been waiting for a single gaming-class card that can power more than 2 displays for quite some time. (The more-than-2 monitors are not necessarily for gaming.)

    The fact that this performs noticeably better than my existing 4870 512MB is a bonus.
