Lower Idle Power & Better Overcurrent Protection

One aspect AMD was specifically looking to improve in Cypress over RV770 was idle power usage. Load power for RV770 was fine at 160W for the HD 4870, but that figure didn't drop by a great deal at idle, falling by less than half to 90W. Later BIOS revisions managed to knock a few more watts off of this, but it wasn't a significant change, and even later designs like RV790 were still limited in their idling abilities, able to go down only to 60W.

As a consequence, AMD designed Cypress with a much, much lower target in mind: 30W at idle, one-third that of RV770. What they got was even better, beating that target by 10% with a final idle power of 27W. As a result Cypress can idle at 30% of the power of RV770, or, measured against Cypress's own load power of 188W, at some 14% of its load power.
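
As a quick sanity check on those numbers (a trivial sketch in Python; every wattage is one quoted above):

    # Idle power figures quoted in the text
    rv770_idle_w   = 90.0    # HD 4870 idle
    cypress_idle_w = 27.0    # HD 5870 idle
    cypress_load_w = 188.0   # HD 5870 load

    print(f"{cypress_idle_w / rv770_idle_w:.0%}")    # 30% -> idles at 30% of RV770's power
    print(f"{cypress_idle_w / cypress_load_w:.0%}")  # 14% -> some 14% of its own load power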

Accomplishing this kind of dramatic reduction in idle power usage required several changes. Key among them were additional power-regulating circuitry on the board, and additional die space on Cypress assigned to power regulation. Notably, all of these changes were accomplished without the use of power-gating to shut down unused portions of the chip, something that's common on CPUs. Instead they were achieved through more exhaustive clock-gating (that is, reducing power consumption by slowing or stopping the clocks fed to parts of the chip), something GPUs have been doing for some time now.
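
As a rough guide to why clock-gating is enough here (a textbook CMOS approximation, not an AMD-supplied formula): chip power is the sum of dynamic (switching) power and static (leakage) power, and only the former depends on the clock:

    P_total = P_dynamic + P_static,    P_dynamic ≈ α · C · V² · f

Clock-gating drives the activity factor α and the effective frequency f toward zero for idle blocks, eliminating their dynamic power; only power-gating would also remove the leakage term, so AMD is evidently content to live with Cypress's leakage at idle.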

The use of clock-gating is quickly evident when we look at the idle/2D clock speeds of the 5870: 150MHz for the core and 300MHz for the memory. These idle clocks are significantly lower than the 4870's (550MHz core/900MHz memory), which in the case of the core is the source of Cypress's power savings compared to the 4870. As tweakers who have attempted to manually reduce the idle clocks on RV770-based cards for further power savings have found, RV770 actually loses stability in most situations if its core clock drops too low. With Cypress this has been rectified, enabling it to hit these lower core speeds.

Even bigger, however, are the enhancements to Cypress's memory controller, which allow it to utilize a number of power-saving tricks with GDDR5 RAM, along with other features that we'll get to in a bit. RV770's memory controller was not capable of taking advantage of many of GDDR5's advanced features beyond its higher bandwidth. Lacking this full bag of tricks, RV770 and its derivatives were unable to reduce the memory clock speed, which is why the 4870 and other products ran such high memory clocks even at idle. In turn this limited how much power could be saved by idling the GDDR5 modules.

With Cypress AMD has implemented nearly the entire suite of GDDR5's power-saving features, allowing them to reduce the power usage of both the memory controller and the GDDR5 modules themselves. As with the improvements to the core clock, key among the memory improvements is the ability to run at much lower memory clock speeds, using fast GDDR5 link re-training to quickly switch the memory clock speed and voltage without inducing glitches. AMD is also now using GDDR5's low-power strobe mode, which in turn allows the memory controller to save power by turning off its clock data recovery mechanism. When discussing the matter with AMD, they compared these changes to putting the memory modules and memory controller into a GDDR3-like mode, which is a fair description of how GDDR5 behaves when its high-speed features are not enabled.
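
Expressed as pseudocode, the idle transition amounts to something like the sketch below. This is purely illustrative; AMD has not published the actual sequence, and every name here is hypothetical:

    # A sketch of the GDDR5 idle transition described above (hypothetical interface).
    IDLE_MEM_CLOCK_MHZ = 300  # the 5870's idle memory clock

    def enter_memory_idle(mc):
        mc.set_clock_and_voltage(IDLE_MEM_CLOCK_MHZ)  # drop clock and voltage together...
        mc.retrain_link()        # ...using fast link re-training to avoid visible glitches
        mc.enter_strobe_mode()   # low-power strobe mode: effectively GDDR3-like operation
        mc.disable_cdr()         # with strobes in use, clock data recovery can be turned off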

Finally, AMD was able to find yet more power savings for Crossfire configurations, such that the slave card(s) in a Crossfire setup use even less power. The figure given to us for an idling slave card is 20W, a product of the fact that slave cards go completely unused while the system is idling. In this state slave cards are still capable of instantaneously ramping up for full-load use, although conceivably AMD could go even lower by powering down the slave cards entirely, at the cost of that ability.

On the opposite side of achieving such low idle power usage is the need to manage load power usage, which was also overhauled for Cypress. As a reminder, TDP is not an absolute maximum; rather it's a maximum based on what's believed to be the highest reasonable load the card will ever experience. As a result it's possible in extreme circumstances for the card to need more power than its TDP rating allows for, which is a problem.

That problem reared its head a lot for RV770 in particular with the rise in popularity of stress-testing programs like FurMark and OCCT. Although stress testers on the CPU side are nothing new, FurMark and OCCT heralded a new generation of GPU stress testers that were extremely effective at generating a maximum load. Unfortunately for RV770, the maximum possible load and the TDP are pretty far apart, which becomes a problem since the VRMs used in a card only need to be spec'd to meet the card's TDP plus some safety room. They don't need to meet whatever the true maximum load of the card may be, as that load should never occur in practice.
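
To put some illustrative numbers on that gap (the 160W figure is the HD 4870's load power from above; the safety margin and the stress-test draw are assumptions invented for this example):

    tdp_w         = 160             # HD 4870 load power, from earlier in the article
    vrm_rating_w  = tdp_w * 1.10    # assumed ~10% safety room above TDP
    stress_load_w = 200             # assumed FurMark/OCCT-class "true maximum" draw
    print(stress_load_w > vrm_rating_w)  # True -> beyond what the VRMs are spec'd for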

Why is this? AMD believes that the instruction streams generated by OCCT and FurMark are entirely unrealistic. They try to hit everything at once, something AMD doesn't believe a game or even a GPGPU application would ever do. For this reason these programs are held in low regard by AMD, and in our discussions with them they referred to the programs as "power viruses", a term normally associated with malware. We don't agree with the terminology, but in our testing we can't disagree with AMD about the realism of the load: we can't find anything that generates the same kind of loads as OCCT and FurMark.

Regardless of what AMD wants to call these stress testers, there was a real problem when they were run on RV770. The overcurrent situation they created was too much for the VRMs on many cards, and as a failsafe these cards would shut down to protect the VRMs. At a user level, shutting down like this isn't a very helpful failsafe mode, and at a hardware level it isn't enough to protect the VRMs in all situations. Ultimately these programs were capable of permanently damaging RV770 cards, and AMD needed to do something about it. For RV770 they could use the drivers to throttle these programs; until Catalyst 9.8 they detected the programs by name, and since 9.8 they detect the ratio of texture to ALU instructions (Ed: We're told NVIDIA throttles similarly, but we don't have a good control for testing this statement). This kept RV770 safe, but it wasn't good enough: it's a hardware problem, so the solution needs to be in hardware, particularly in case someone really were to write a power virus in the future that the drivers couldn't stop, in an attempt to break cards on a wide scale.

This brings us to Cypress. For Cypress, AMD has implemented a hardware solution to the VRM problem, dedicating a very small portion of Cypress's die to a monitor whose job is to continually watch the VRMs for dangerous conditions. Should the VRMs end up in a critical state, the monitor will immediately throttle back the card by one PowerPlay level. The card will continue operating at this level until the VRMs are back at safe levels, at which point the monitor will allow the card to return to the requested performance level. With a stressful program this can continue to bounce back and forth as the VRMs permit.
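
As a sketch, the monitor's decision logic amounts to a simple feedback loop like the one below. This is illustrative only; the real logic is fixed-function hardware, and the names are hypothetical:

    def next_powerplay_level(current, requested, vrm_critical):
        """Pick the PowerPlay level for the next tick (lower = slower and safer)."""
        if vrm_critical:
            return max(current - 1, 0)   # throttle back one level immediately
        if current < requested:
            return current + 1           # VRMs are safe again: step back up
        return current                   # steady state at the requested level

Under a sustained stress test this simply oscillates between adjacent levels, throttling only as much as the VRMs require.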

By implementing this at the hardware level, Cypress cards are fully protected against all possible overcurrent situations, so that it's not possible for any program (OCCT, FurMark, or otherwise) to damage the hardware by generating too high a load. This also means that the protections at the driver level are no longer needed, and we've confirmed with AMD that the 5870 is allowed to run to the point where it maxes out or where overcurrent protection kicks in.

On that note, because card manufacturers can use different VRMs, it's very likely that we're going to see some separation in FurMark and OCCT performance based on the quality of the VRMs. The cheapest cards with the cheapest VRMs will need to throttle the most, while luxury cards with better VRMs should need to throttle little, if at all. This should make little difference in stock performance in real games and applications (since, as we covered earlier, we can't find anything that pushes a card to excess), but it will likely make itself apparent in overclocking. Overclocked cards, particularly those with voltage modifications, may hit throttle situations in normal applications, which means the VRMs will make a difference there. It also means that overclockers need to keep an eye on clock speeds, as the card shutting down is no longer a tell-tale sign that you're pushing it too hard.

Finally, while we’re discussing the monitoring chip, we may as well talk about the rest of its features. Along with monitoring the GPU, it also is a PWM controller. This means that the PWM controller is no longer a separate part that card builders add themselves, and as such we won’t be seeing any cards using a 2pin fixed speed fan to save money on the PWM controller. All Cypress cards (and presumably, all derivatives) will have the ability to use a 4pin fan built-in.

Comments

  • mapesdhs - Saturday, September 26, 2009


    MODel3 writes:
    > 1.Geometry/vertex performance issues ...
    > 2.Geometry/vertex shading performance issues ...

    Would perhaps some of the subtests in 3DMark06 be able to test this?
    (not sure about Vantage, never used that yet) Though given what Jarred
    said about the bandwidth and other differences, I suppose it's possible
    to observe large differences in synthetic tests which are not the real
    cause of a performance disparity.

    The trouble with heavy GE tests is, they often end up loading the fill
    rates anyway. I've run into this problem with the SGI tests I've done
    over the years:

    http://www.sgidepot.co.uk/sgi.html

    The larger landscape models used in the Inventor tests are a good
    example. The points models worked better in this regard for testing
    GE speed (stars3/star4), but I don't know to what extent modern PC
    gfx is designed to handle points modelling - probably works better
    on pro cards. Actually, Inventor wasn't a good choice anyway as it's
    badly CPU-bound and API-heavy (I should have used Performer, gives
    results 5 to 10X faster).

    Anyway, point is, synthetic tests might allow one to infer that one
    aspect of the gfx pipeline is a bottleneck when in fact it isn't.

    Ages ago I emailed NVIDIA (Ujesh, who I used to know many moons ago,
    but alas he didn't reply) asking when, if ever, they would add
    performance counters and other feedback monitors to their gfx
    products so that applications could tell what was going on in the
    gfx pipeline. SGI did this years ago, which allowed systems like
    IR to support impressive functions such as Dynamic Video Resizing by
    being able to monitor frame by frame what was going on within the gfx
    engine at each stage. Try loading any 3D model into perfly, press F1
    and click on 'Gfx' in the panel (Linux systems can run Performer), eg.:

    http://www.sgidepot.co.uk/misc/perfly.gif

    Given how complex modern PC gfx has become, it's always been a
    mystery to me why such functions haven't been included long ago.
    Indeed, for all that Crysis looks amazing, I was never that keen on
    it being used as a benchmark since there was no way of knowing
    whether the performance hammering it created was due to a genuinely
    complex environment or just an inefficient gfx engine. There's still
    no way to be sure.

    If we knew what was happening inside the gfx system, we could easily
    work out why performance differences for different apps/games crop
    up the way they do. And I would have thought that feedback monitors
    within the gfx pipe would be even more useful to those using
    professional applications, just as it was for coders working on SGI
    hardware in years past.

    Come to think of it, how do NVIDIA/ATI even design these things
    without being able to monitor what's going on? Jarred, have you ever
    asked either company about this?

    Ian.

  • JarredWalton - Saturday, September 26, 2009

    I haven't personally, since I'm not really the GPU reviewer here. I'd assume most of their design comes from modeling what's happening, and with knowledge of their architecture they probably have utilities that help them debug stuff and figure out where stalls and bottlenecks are occurring. Or maybe they don't? I figure we don't really have this sort of detail for CPUs either, because we have tools that know the pipeline and architecture and they can model how the software performs without any hardware feedback.
  • MODEL3 - Thursday, October 01, 2009

    I checked the web for synthetic geometry tests.
    Sadly I only found 3DMark Vantage tests.
    You can't tell much from them, but they are indicative.

    Check:

    http://www.pcper.com/article.php?aid=783&type=...

    GPU Cloth: 5870 is only 1.2X faster than 4890. (vertex/geometry shading test)
    GPU Particles: 5870 is only 1.2X faster than 4890. (vertex/geometry shading test)

    Perlin Noise: 5870 is 2.5X faster than 4890. (Math-heavy Pixel Shader test)
    Parallax Occlusion Mapping: 5870 is 2.1X faster than 4890. (Complex Pixel Shader test)

    None of the above 4 tests is bandwidth limited at all.
    Just for example, if you check:

    http://www.pcper.com/article.php?aid=674&type=...

    You will see that a 750MHz 4870 512MB is 20-23% faster than a 625MHz 4850 in all the above 4 tests, so the extra bandwidth (115.2GB/s vs 64GB/s) doesn't help at all.
    But the 4850 is extremely bandwidth limited in the color fillrate test (the 4870 is 60% faster than the 4850).

    Also it shouldn't be a problem with the dual rasterizer/dual SIMD engine efficiency, since the synthetic pixel shader tests are fine (more than 2X) while the synthetic geometry shading tests show only 1.2X.

    My guess is ATI didn't improve the classic geometry set-up engine and the GS because they want to promote vertex/geometry techniques based on the DX11 tessellator from now on.
  • Zool - Friday, September 25, 2009

    In DX11 the fixed tessellation units will do much finer geometry detail for much less memory space, and on chip, so I think there isn't a single problem with that. Also the compute shader needs minimal memory bandwidth and can utilize plenty of idle shaders. The card is designed with DX11 in mind and it isn't using the whole pipeline after all. I wouldn't draw conclusions too early. (I think the performance will be much better after a few driver releases.)

  • MODEL3 - Saturday, September 26, 2009

    For the DX11 tessellator to be utilized, the game engine must take advantage of it.
    I am not talking about the tessellator.
    I am talking about the classic geometry unit (DX9/DX10 engines) and the Geometry Shader [GS] (DX10 engines only).

    I'll check to see if I can find a tech site that has synthetic benchmarks for geometry-related performance, and I will post again tomorrow if I can find anything.

  • JarredWalton - Friday, September 25, 2009

    It's worth noting that when you factor in clock speeds, compared to the 5870 the 4870X2 offers 88% of the core performance and 50% more bandwidth. Some algorithms/games require more bandwidth and others need more core performance, but it's usually a combination of the two. The X2 also has CrossFire inefficiencies to deal with.

    More interesting perhaps is that the GTX 295 offers (by my estimates, which admittedly are off in some areas) roughly 10% more GPU shader performance, about 18.5% more fill rate, and 46% more bandwidth than the HD 5870. The fact that the HD 5870 is still competitive is a good sign that ATI is getting good use of their 5 SPs per Stream Processor design, and that they are not memory bandwidth limited -- at least not entirely.
  • SiliconDoc - Wednesday, September 30, 2009

    The 4870X2 has somewhere around "double the data paths" in and out of its 2 GPUs. So what you have with the 5870 putting, as some have characterized, "2x RV770 cores melded into one" is DOUBLE THE BOTTLENECK in and out of the core.
    They tried to compensate with GDDR5 at 1200/4800, but the fact remains, they only get so much with that "NOT ENOUGH DATA PATHS/PINS in and out of that gpu core."
  • cactusdog - Friday, September 25, 2009

    Omg these cards look great. Lol, Silicon Doc is so gutted and furious he is making himself look like a damn fool again, only this time he should be on suicide watch... Nvidia cards are now obsolete.. LOL.
  • mapesdhs - Friday, September 25, 2009


    Hehe, indeed. Have you ever seen a scifi series called "They Came
    From Somewhere Else"? S.D.'s getting so worked up, reminds me of
    the scene where the guy's head explodes. :D

    Hmm, that's an alternative approach I suppose in place of post
    moderation. Just get someone so worked up about something they'll
    have an aneurysm and pop their clogs... in which case, I'll hand
    it back to Jarred. *grin*

    Ian.

  • SiliconDoc - Friday, September 25, 2009

    That is quite all right. You fellas make sure to read it all; I am more than happy that the truth is sinking into your gourds, and you won't be able to shake it.
    I am very happy about it.
