Keeping It Cool: Transistors, Throttles, and Coolers

Beyond the specific architectural improvements for GF110 we previously discussed, NVIDIA has also been tinkering with their designs at a lower level to see what they could do to improve performance in conjunction with TSMC’s 40nm manufacturing process. GF100/GTX 480 quickly gathered a reputation as a hot product, and this wasn’t an unearned reputation. Even with an SM fused off, GTX 480 already had a TDP of 250W, and the actual power draw could surpass that in extreme load situations such as FurMark.

NVIDIA can (and did) tackle this on the cooling side by better dissipating that heat, but keeping their GPUs from generating so much of it in the first place was equally important. This was especially true if they wanted to push high-clocked, fully-enabled designs onto the consumer GeForce and HPC Tesla markets, with the latter in particular not being a market where you can simply throw more cooling at the problem. As a result NVIDIA had to look at GF110 at the transistor level and determine what they could do to cut power consumption.

Semiconductors are a near-perfect power-to-heat conversion device, so a lot of work goes into getting as much work done with as little power as necessary. This is compounded by the fact that dynamic power (which does useful work) only represents some of the power used – the rest is wasted as leakage power. In the case of a high-end GPU NVIDIA doesn’t necessarily want to reduce dynamic power usage and have it impact performance; instead they want to go after leakage power. This in turn is complicated by the fact that leakage and high clockspeeds go hand in hand, making the two difficult to separate: leaky transistors are high-clocking transistors, and vice versa.
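
To put rough numbers on that split, here is a minimal sketch using the textbook CMOS power approximations; the figures below are purely illustrative guesses and are not NVIDIA’s numbers.

```python
# Textbook CMOS power split: dynamic (switching) power does useful work,
# leakage (static) power is wasted. All numbers below are illustrative
# guesses for a large 40nm GPU, not actual GF100/GF110 figures.

def dynamic_power(c_eff, voltage, frequency):
    """P_dyn = C_eff * V^2 * f, with switching activity folded into C_eff."""
    return c_eff * voltage ** 2 * frequency

def leakage_power(voltage, i_leak):
    """P_leak = V * I_leak, burned whether or not the chip is doing work."""
    return voltage * i_leak

p_dyn = dynamic_power(c_eff=2e-7, voltage=1.0, frequency=772e6)  # ~154 W
p_leak = leakage_power(voltage=1.0, i_leak=80.0)                 # 80 W
print(f"dynamic: {p_dyn:.0f} W, leakage: {p_leak:.0f} W")
```

Cutting the leakage current shrinks the second term without touching the frequency in the first, which is exactly the knob NVIDIA wants to turn.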


A typical CMOS transistor: Thin gate dielectrics lead to leakage

Thus the trick to making a good GPU is to use leaky transistors where you must, and use slower transistors elsewhere. This is exactly what NVIDIA did for GF100, where they primarily used 2 types of transistors differentiated in this manner. At a functional unit level we’re not sure which units used what, but it’s a good bet that most devices operating on the shader clock used the leakier transistors, while devices attached to the base clock could use the slower transistors. Of course GF100 ended up being power hungry – and by extension we assume leaky anyhow – so that design didn’t necessarily work out well for NVIDIA.

For GF110, NVIDIA included a 3rd type of transistor, which they describe as having “properties between the two previous ones”. Or in other words, NVIDIA began using a transistor that was leakier than a slow transistor, but not as leaky as the leakiest transistors in GF100. Again we don’t know which types of transistors were used where, but in using all 3 types NVIDIA ultimately was able to lower power consumption without needing to slow any part of the chip down. In fact this is where virtually all of NVIDIA’s power savings come from, as NVIDIA removed few if any transistors outright, considering that GF110 retains all of GF100’s functionality.

Of course reducing leakage is one way to reduce power consumption, but it doesn’t solve NVIDIA’s other problems in hitting their desired TDP. Both NVIDIA and AMD base their GPU TDP specifications around “real world” applications and games, with NVIDIA largely viewed as being the more aggressive of the two on this front. In either case load-generating programs like FurMark and OCCT do not exist in AMD or NVIDIA’s worlds, leading both companies to greatly despise these programs and label them with terms like “power virus”.

After a particularly rocky relationship with FurMark blowing up VRMs on the Radeon 4000 series, AMD instituted safeties in their cards with the 5000 series to protect against FurMark – AMD monitored the temperature of the VRMs, and would immediately downclock the GPU if the VRM temperatures exceeded specifications. Ultimately, as this was temperature-based, AMD’s cards were allowed to run to the best of their capabilities, so long as they weren’t going to damage themselves. In practice we rarely encountered AMD’s VRM protection even with FurMark except in overclocking scenarios, where overvolting cards such as the 5970 quickly drove up the temperature of the VRMs.

For GTX 580 NVIDIA is taking an even more stringent approach than AMD, as they’ll be going after power consumption itself rather than just focusing on protecting the card. Attached to the GTX 580 is a series of power monitoring chips, which monitor the amount of power the card is drawing from the PCIe slot and the PCIe power plugs. By collecting this information NVIDIA’s drivers can determine if the card is drawing too much power, and slow the card down to keep it within spec. This kind of power throttling is new for GPUs, though it has been common on CPUs for a long time.
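
As a rough mental model of how such a scheme could work, here is a hedged sketch of a driver-side monitoring check; the sensor readout, performance-state callback, and rail values are all hypothetical stand-ins, since NVIDIA hasn’t documented the actual implementation.

```python
# A hypothetical sketch of driver-side board power throttling: sum the
# readings from the on-board monitoring chips (PCIe slot + PCIe power
# plugs) and pull clocks back if the total exceeds the card's power spec.
import random

POWER_LIMIT_W = 244.0  # GTX 580's rated TDP

def read_rail_power_w():
    """Stand-in for the monitoring hardware; the real chips report the
    draw on each input rail (slot, 6-pin plug, 8-pin plug)."""
    return {
        "pcie_slot": random.uniform(40, 75),
        "6pin_plug": random.uniform(40, 75),
        "8pin_plug": random.uniform(80, 150),
    }

def power_check(set_performance_state):
    """One iteration of the check the driver would run periodically."""
    total = sum(read_rail_power_w().values())
    if total > POWER_LIMIT_W:
        set_performance_state("throttled")  # clocks drop, even if reported clocks do not
    else:
        set_performance_state("full")
    return total

print(power_check(lambda state: print("performance state ->", state)))
```

The key point, per NVIDIA’s description, is that the trigger is total board power rather than temperature, which is what separates this from AMD’s VRM-based protection.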

NVIDIA’s reasoning for this change doesn’t pull any punches: it’s to combat OCCT and FurMark. At an end-user level FurMark and OCCT really can be dangerous – even if they can’t break the card any longer, they can still cause other side-effects by drawing too much power from the PSU. As a result, having this protection in place more or less makes it impossible to toast a video card or any other part of a computer with these programs. Meanwhile at a PR level, we believe that NVIDIA is tired of seeing hardware review sites publish numbers showcasing GeForce products drawing exorbitant amounts of power, even though these numbers represent non-real-world scenarios. By throttling FurMark and OCCT like this, we shouldn’t be able to get their cards to pull so much power. We still believe that tools like FurMark and OCCT are excellent load-testing tools for finding a worst-case scenario and helping our readers plan system builds with those scenarios in mind, but at the end of the day we can’t argue that this isn’t a logical position for NVIDIA.


Power Monitoring Chips Identified

While this is a hardware measure, the real trigger is in software. FurMark and OCCT are indeed throttled, but we’ve been able to throw other programs at the GTX 580 that cause a similar power draw without being throttled. If NVIDIA were actually doing this entirely in hardware, everything would be caught, but clearly it’s not. For the time being this simplifies everything – you need not worry about throttling in anything else whatsoever – but there will be ramifications if NVIDIA ever uses the hardware to its full potential.

Much like GDDR5 EDC complicated memory overclocking, power throttling would complicate overall video card overclocking, particularly since there’s currently no way to tell when throttling kicks in. On AMD cards the clock drop is immediately visible, but on NVIDIA’s cards the drivers continue to report the card operating at full voltage and clocks. We suspect NVIDIA is using a NOP or HLT-like instruction here to keep the card from doing real work, but the result is that the throttling is completely invisible even to enthusiasts. At the moment the only way to tell it’s kicking in is if an application’s performance is lower than it should be. It goes without saying that we’d like some way to tell when throttling is active should NVIDIA fully utilize this hardware.
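
In the meantime, the only practical check is the indirect one just described: compare measured performance against a known un-throttled baseline at the same settings. A minimal sketch of that idea, with made-up numbers:

```python
# If reported clocks and voltage never change, the only tell is that a
# fixed workload runs slower than it should. The baseline figure and
# tolerance here are made-up values for illustration.

def probably_throttled(measured_fps, baseline_fps, tolerance=0.85):
    """True if a run came in well under its known un-throttled result."""
    return measured_fps < baseline_fps * tolerance

# e.g. a FurMark preset that normally averages 60 fps only manages 35 fps
print(probably_throttled(measured_fps=35, baseline_fps=60))  # True
```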

Finally, with average and maximum power consumption dealt with, NVIDIA turned to improving cooling on the GTX 580 to bring temperatures down and to more quietly dissipate heat. The GTX 480 was not only loud, but it had an unusual cooling design that, while we were fine with it, ended up raising eyebrows elsewhere. Specifically, NVIDIA had heatpipes sticking out of the GTX 480, an exposed metal grill over the heatsink, and holes in the PCB on the back side of the blower to allow it to breathe from both sides. Considering the card was dissipating over 300W at times it was effective, but apparently not a design NVIDIA liked.

So for GTX 580 NVIDIA has done a lot of work under the hood to produce a card that looks less like the GTX 480 and more like the fully enclosed coolers we saw with the GTX 200 series; the grill, external heatpipes, and PCB ventilation holes are all gone from the GTX 580, and no one would hold it against you if you mistook it for a GTX 285. The biggest change in making this possible is NVIDIA’s choice of heatsink: NVIDIA has ditched traditional heatpipes and gone to the new workhorse of vapor chamber cooling.


A Vapor Chamber Cooler In Action (Courtesy NVIDIA)

 


The GTX 580's Vapor Chamber + Heatsink

Vapor chamber coolers have been around for quite some time as aftermarket/custom coolers, and are often the signature design element for Sapphire; it was only more recently, with the Radeon HD 5970, that we saw one become part of a reference GPU design. NVIDIA has gone down the same road and is now using a vapor chamber for the reference GTX 580 cooler. Visually this means the heatpipes are gone, while internally this should provide equal if not better heat conduction between the GPU’s heatspreader and the aluminum heatsink proper. The ultimate benefit is that with better heat transfer, the blower no longer needs to run as hard to keep the heatsink cool enough to maintain the temperature difference that pulls heat away from the GPU.
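
A simplified thermal-resistance model illustrates the point: GPU temperature is roughly ambient plus power times the sum of the conduction resistance (GPU to heatsink) and the convection resistance (heatsink to air), and only the latter depends on how hard the blower runs. The resistance values below are invented for illustration only.

```python
# Simplified steady-state model: T_gpu ~= T_ambient + P * (R_cond + R_conv).
# R_cond is the GPU-to-heatsink path (heatpipes vs. vapor chamber),
# R_conv is the heatsink-to-air path (drops as the blower spins faster).
# All values are illustrative, not measured.

def gpu_temp_c(power_w, ambient_c, r_cond, r_conv):
    return ambient_c + power_w * (r_cond + r_conv)

# Heatpipe-style conduction: hitting ~84C requires a low R_conv, i.e. a
# fast, loud blower.
print(gpu_temp_c(power_w=244, ambient_c=30, r_cond=0.12, r_conv=0.10))  # ~83.7

# A vapor chamber lowers R_cond, so the same ~84C is reachable with a
# higher R_conv, i.e. a slower, quieter blower.
print(gpu_temp_c(power_w=244, ambient_c=30, r_cond=0.08, r_conv=0.14))  # ~83.7
```

Lowering the conduction term buys headroom on the convection term, and that headroom is what the blower can spend on running slower and quieter.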

NVIDIA’s second change was to the blower itself, which is the source of all noise. NVIDIA found that the blower on the GTX 480 was vibrating against itself, producing additional noise and in particular the kind of high-pitched whining that makes a cooler come off as noisy. As a result NVIDIA has switched out the blower for a slightly different design that keeps a ring of plastic around the top, providing more stability. This isn’t a new design – it’s on all of our Radeon HD 5800 series cards – but much like the vapor chamber this is the first time we’ve seen it on an NVIDIA reference card.

Top: GTX 480 Blower. Bottom: GTX 580 Blower

Finally, NVIDIA has also tinkered with the shape of the shroud encasing the card for better airflow. NVIDIA already uses a slightly recessed shroud near the blower in order to allow some extra space between it and the next card, but they haven’t done anything with the overall shape until now. Starting with the GTX 580, the shroud is slightly wedge-shaped between the blower and the back of the card; this, according to NVIDIA, improves airflow in SLI setups where there’s a case fan immediately behind the card, by funneling more fresh air into the gap between cards.

Comments

  • spigzone - Tuesday, November 9, 2010 - link

    Any 'bandwagon' here belongs to Nvidia.
  • mac2j - Tuesday, November 9, 2010 - link

    Actually the new ATI naming makes a bit more sense.

    It's not a new die shrink, but the 6xxx cards all do share some features not found at all in the 5xxx series, such as DisplayPort 1.2 (which could become very important if 120 and 240Hz monitors ever catch on).

    Also the Cayman 69xx parts are in fact a significantly original design relative to the 58xx parts.

    Nvidia to me is the worst offender... 'cause a 580 is just a fully-enabled 480 with the noise and power problems fixed.
  • Sihastru - Tuesday, November 9, 2010 - link

    If you think that stepping up the spec on the output ports warrants skipping a generation when naming your product, see that mini-HDMI port on the 580, that's HDMI 1.4 compliant... the requirements for 120Hz displays are met.

    The GF110 is not a GF100 with all the shaders enabled. It looks that way to the uninitiated. GF110 has much more in common with GF104.

    GF110 has three types of transistors, graded by leakage, while the GF100 has just two. This gives you the ability to clock the core higher while having a lower TDP. It is smaller in size than GF100 is, while maintaining the 40nm fab node. The GTX580 has a power draw limitation system on the board, the GTX480 does not...

    What else... support for full speed FP16 texture filtering which enhances performance in texture heavy applications. New tile formats which improve Z-cull efficiency...

    So how does displayport 1.2 warrant the 68x0 name for AMD but the few changes above do not warrant the 5x0 name for nVidia?

    I call BS.
  • Griswold - Wednesday, November 10, 2010 - link

    I call your post bullshit.

    The 580 comes with the same old video engine as the GF100 - if it was so close to GF104, it would have that video engine and all the goodies and improvements it brings over the one in the 480 (and 580).

    No, GTX580 is a fixed GF100, and most of what you listed there supports that because it fixes what was broken with the 480. That's all.
  • Sihastru - Wednesday, November 10, 2010 - link

    I'm not sure what you mean... maybe you're right... but I'm not sure... If you're referring to bitstreaming support, just wait for a driver update, the hardware supports it.

    See: http://www.guru3d.com/article/geforce-gtx-580-revi...

    "What is also good to mention is that HDMI audio has finally been solved. The stupid S/PDIF cable to connect a card to an audio codec, to retrieve sound over HDMI is gone. That also entails that NVIDIA is not bound to two channel LPCM or 5.1 channel DD/DTS for audio.

    Passing on audio over the PCIe bus brings along enhanced support for multiple formats. So VP4 can now support 8 channel LPCM, lossless format DD+ and 6 channel AAC. Dolby TrueHD and DTS Master Audio bit streaming are not yet supported in software, yet in hardware they are (needs a driver update)."

    NEVER rely just on one source of information.

    Fine, if a more powerful card than the GTX480 can't be named the GTX580, then why is it OK for a card that performs lower than the HD5870 to be named the HD6870... screw technology, screw refinements, talk numbers...

    Whatever...
  • Ryan Smith - Wednesday, November 10, 2010 - link

    To set the record straight, the hardware does not support full audio bitstreaming. I had NV themselves confirm this. It's only HDMI 1.4a video + the same audio formats that GTX 480 supported.
  • B3an - Wednesday, November 10, 2010 - link

    You can all argue all you want, but at the end of the day, for marketing reasons alone, NV really didn't have much of a choice but to name this card the 580 instead of 485 after ATI gave their cards the 6xxx series names. Which don't deserve a new series name either.
  • chizow - Tuesday, November 9, 2010 - link

    No, ATI's new naming convention makes no sense at all. Their x870 designation has always been reserved for their single-GPU flagship part ever since the HD3870, and this naming convention has held true through both the HD4xxx and HD5xxx series. But the 6870 clearly isn't the flagship of this generation; in fact, it's slower than the 5870, while the 580 is clearly faster than the 480 in every aspect.

    To further complicate matters, ATI also launched the 5970 as a dual-GPU part, so single-GPU Cayman being a 6970 will be even more confusing and will also be undoubtedly slower than the 5970 in all titles that have working CF profiles.

    If anything, Cayman should be 5890 and Barts should be 5860, but as we've seen from both camps, marketing names are often inconvenient and short-sighted when they are originally designated......
  • Galid - Tuesday, November 9, 2010 - link

    We're getting into philosophy there. Know what's a sophism? An argument that seems strong but isn't because there's a fail in it. The new honda 2011 ain't necessarily better than the 2010 because it's newer.

    They name it differently because it's changed and they wanna make you believe it's better, but history proved it's not always the case. So the argument that a newer generation means better is a false argument. Not everything new ''gotta'' be better in every way to live up to its name.

    But it's my opinion.
  • Galid - Tuesday, November 9, 2010 - link

    It seems worse, but that rebranding is all OK in my mind as the 6870 comes in at a cheaper price than the 5870. So everyone can be happy about it. Nvidia did worse, rebranding some of the 8xxx series into 9xxx chips for a higher price but with almost no change and no more performance. The 9600gt comes to my mind...

    What is the 9xxx series? A remake of a ''better'' 8xxx series. What is the GTS3xx series? A remake of the GTX2xx. What is the GTX5xx... and so on. Who cares? If it's priced well it's all OK. When I see someone going to Staples to get a 9600gt at 80$ and I know I can get a 4850 for almost the same price, I say WTF!!!

    The GTX580 deserves whatever name they want to give it. Making sense of all that naming is up to whoever cares. But whoever wants to pay, for example, 100$ for a card should get performance according to that, and that seems more important than everything else to me!
