Meet The 6990, Cont

Moving on from cooling, let’s discuss the rest of the card. From a power perspective, the 6990 is fed by two 8-pin PCIe sockets on top of the PCIe bus power. At 150W + 150W + 75W this adds up to the card’s 375W limit. As was the case on the 5970, any increase in power consumption will result in exceeding the specifications for PCIe external power, requiring a strong power supply to drive the card. 375W is in and of itself outside the PCIe specification, and we’ll get to the importance of that in a bit. Meanwhile, as was the case on the 5970, at default clocks the GPUs on the 6990 are undervolted to help meet the TDP. AMD is running the 6990’s GPUs at 1.12v, binning Cayman chips to find the best GPUs capable of running at 830MHz at this lower voltage.

At the end of the day power has a great deal of impact on GPU performance, so in pushing the 6990’s performance past the 5970 AMD has played with both the amount of power the card can draw at default settings (which is why we’re at 375W now) and with power management itself. By power management we’re of course referring to PowerTune, which was first introduced on the 6900 series back in December. By capping a card’s power consumption at a set value and throttling the card back if it exceeds that value, AMD can increase GPU clocks without having to base their final clocks on the power consumption of outliers like FurMark. The hardest part of course is finding balance: set clocks too high for a given wattage and everything throttles, which is counterproductive and leads to inconsistent performance; set clocks too low and you’re losing out on potential performance.

AMD Radeon HD 6990 PowerTune Throttling
Game/Application Throttled?
Crysis: Warhead No
BattleForge No
Metro Yes (780MHz)
HAWX No
Civilization V No
Bad Company 2 No
STALKER No
DiRT 2 No
Mass Effect 2 No
Wolfenstein No
3DMark Vantage Yes
MediaEspresso 6 No
Unigine Heaven No
FurMark Yes (580MHz)
Distributed.net Client Yes (770MHz)

It’s the increase in power consumption and the simultaneous addition of PowerTune that has allowed AMD to increase GPU clocks as much as they have over the 5970. Cayman as an architecture is faster than Cypress in the first place, but having a 105MHz core clock advantage really seals the deal. At default settings PowerTune appears to be configured nearly identically on the 6990 as it is on the 6970: FurMark heavily throttles, while Metro and the newly updated Distributed.net client experience slight throttling. The usual PowerTune adjustment range of +/- 20% is available, allowing a card in its default configuration to have its PowerTune limit set anywhere between 300W and 450W.
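To make the mechanism concrete, here is a minimal sketch of how a PowerTune-style cap could behave. All of the function names are ours, the power figure is treated as a directly measured wattage for simplicity (the real hardware estimates power from internal activity counters), and the linear clock/power relationship is a deliberate simplification:

```python
# Hypothetical sketch of PowerTune-style throttling. None of these
# names come from AMD; only the 830MHz/375W figures and the +/-20%
# range are from the card's actual specs.

BASE_CLOCK_MHZ = 830   # 6990 default core clock
MIN_CLOCK_MHZ = 500    # arbitrary floor for this sketch
POWER_LIMIT_W = 375    # default board power limit

def throttled_clock(estimated_power_w, limit_w=POWER_LIMIT_W):
    """Return the core clock to run at so power stays under the cap.

    Assumes dynamic power scales roughly linearly with clock; in
    reality voltage scaling makes it worse than linear.
    """
    if estimated_power_w <= limit_w:
        return BASE_CLOCK_MHZ
    scale = limit_w / estimated_power_w
    return max(MIN_CLOCK_MHZ, int(BASE_CLOCK_MHZ * scale))

def powertune_limit(offset_pct, limit_w=POWER_LIMIT_W):
    """Apply the user-adjustable +/-20% offset: 375W -> 300W..450W."""
    offset_pct = max(-20, min(20, offset_pct))
    return limit_w * (1 + offset_pct / 100)
```

The key property this illustrates is that a workload under the cap runs at full clocks, while an outlier like FurMark is scaled back just far enough to fit the budget, rather than forcing AMD to pick default clocks around the worst case.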

While we’re on the subject of PowerTune, there is one thing we were hoping to see that we did not get: dynamic limits based on CrossFire usage. This isn’t a complaint per se so much as a pie-in-the-sky idea. Perhaps the biggest downside to a dual-GPU card for performance purposes is that it can’t match a single high-end card in clockspeed when only one GPU is in use, as clocks are kept low to keep total dual-GPU power consumption down. One thing we’d like to see in the future is for GPU1 to be allowed to hit standard single-GPU clocks (e.g. 880MHz) when GPU2 is not in use, with PowerTune arbitrating to keep total power consumption in check. This would allow cards like the 6990 to be as fast as high-end single-GPU cards in tasks that don’t benefit from CrossFire, such as windowed mode games, emulators, GPGPU applications, and games that don’t have a CF profile. We’re just thinking out loud here, but the potential is obvious.
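The arbitration we have in mind could be sketched like this. To be clear, this is entirely speculative on our part; AMD’s PowerTune does nothing of the sort today, and every name and threshold below is ours:

```python
# Speculative sketch of per-GPU PowerTune arbitration on a dual-GPU
# card: a lone active GPU gets single-card clocks and the full board
# power budget. Not an AMD feature; all names here are hypothetical.

BOARD_LIMIT_W = 375
DUAL_CLOCK_MHZ = 830    # 6990 default with both GPUs active
SINGLE_CLOCK_MHZ = 880  # 6970-style clock for a lone GPU

def per_gpu_targets(gpu1_busy, gpu2_busy):
    """Return {gpu: (clock_mhz, power_budget_w)} for both GPUs."""
    active = [g for g, busy in ((1, gpu1_busy), (2, gpu2_busy)) if busy]
    if len(active) == 2:
        # Both GPUs busy: split the budget, run dual-GPU clocks.
        return {1: (DUAL_CLOCK_MHZ, BOARD_LIMIT_W / 2),
                2: (DUAL_CLOCK_MHZ, BOARD_LIMIT_W / 2)}
    targets = {1: (0, 0.0), 2: (0, 0.0)}  # idle GPUs get nothing
    for g in active:
        # A lone busy GPU gets single-card clocks and the whole budget.
        targets[g] = (SINGLE_CLOCK_MHZ, float(BOARD_LIMIT_W))
    return targets
```

With something like this, a game without a CF profile would see 6970-class clocks from GPU1 while GPU2 idles, and the board would still never exceed its 375W limit.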

Moving on, as with the 5970 and 2GB 5870 the 6990 is outfitted with 16 RAM chips, 8 per GPU. Half are on the front of the PCB and the other half are on the rear; the card’s backplate provides protection and heat dissipation for the rear-mounted RAM. In one of the few differences from the 6970, the 6990 uses 5GHz GDDR5 instead of 6GHz GDDR5 – our specific sample is using 2Gb Hynix T2C modules. This means that at the card’s 5GHz stock speed, the RAM is already running as fast as it’s spec’d for. Hynix’s datasheets rate their 6GHz RAM for 1.6v, versus 1.5v for their 5GHz RAM, so the difference likely comes down to a few factors: keeping RAM power consumption down, keeping costs down, and any difficulties in running RAM above 5GHz on such a cramped design. In any case we don’t expect there to be much RAM overclocking headroom in this design.

Finally, display connectivity has once again changed compared to both the 5970 and 6970. As Cayman GPUs can only drive 1 dual-link DVI monitor, AMD has dropped the 2nd SL-DVI port and the HDMI port in favor of additional mini-DisplayPorts. While all Cayman GPUs (and Cypress/5800 before it) can drive up to 6 monitors, the only way to do so with 1 slot’s worth of display connectors is either through 6 mini-DP ports (a la the Eyefinity-6 cards), or through using still-unavailable MST hubs to split DP ports. The 6990 is a compromise in this regard – an E6 design requires an expensive DP to DL-DVI adaptor to drive even 1 DL-DVI monitor, while a 5970-like design of 2x DVI + 1 mini-DP doesn’t allow 6 monitors in all cases even with MST hubs. The end result is 1 DL-DVI port for 2560x1600/2560x1440 legacy monitors, and 4 more mini-DP ports for newer monitors. This allows the 6990 to drive 5 monitors today, and all 6 monitors in the future when MST hubs do hit the market.

As with the 5870 E6 cards, AMD is stipulating that partners include adapters in order to bridge the gap until DisplayPort is widely adopted. All 6990s will come with 1 passive SL-DVI adapter (taking advantage of the 3rd TMDS transmitter on Cayman), 1 active SL-DVI adapter, and 1 passive HDMI adapter. Between the card’s on-board connectivity options and the adapters, it’s possible to drive just about any combination of monitors short of multiple DL-DVI monitors, including the popular 3-monitor 1080p Eyefinity configuration.


Active SL-DVI Adapter

With all of this said, the change in cooling design and power consumption/heat dissipation does require an additional level of attention towards making a system work, beyond even card length and power supply considerations. We’ve dealt with a number of high-end cards before that don’t fully exhaust their hot air, but nothing we’ve reviewed is quite like the 6990. Specifically, nothing we’ve reviewed before has been a 12” card that shoots 185W+ of heat directly out of the rear of the card; most of the designs we see are much more open and basically drive air out at all angles.

The critical point is that the 6990 is dumping a lot of hot air into your case, and that it’s doing so a foot in front of the rear of the case. Whereas the 5970 was moderately forgiving about cooling if you had the space for it, the 6990 will not be. You will need a case with a lot of airflow, and particularly if you overclock the 6990, a case that doesn’t put anything of value directly behind the card.

To make a point, we quickly took the temperatures of a 500GB Seagate hard drive in our GPU test rig when placed in the drive cage directly behind the 6990 in PEG slot 1. As a result the 6990 is directly blowing on the hard drive. Note here that normally we have a low-speed 120mm fan directly behind PEG 1, which we have to remove to make room for the 5970 and 6990. All of these tests were run with Crysis in a loop – so they aren’t the highest possible values we could achieve.

Seagate 500GB Hard Drive Temperatures
Video Card Temperature
Radeon HD 6990 37C
Radeon HD 6990 OC 40C
Radeon HD 6970 CF 27C
Radeon HD 5970 31C

At default clocks for the 6990 our hard drive temperature is 37C, while overclocked this reaches 40C. Meanwhile if we replace the 6990 with the 5970, this drops to 31C. Replace that with a pair of 6970s in CrossFire and our 120mm fan, and that drops even more to 27C. So the penalty for having a dual-exhaust card like the 6990 as far as our setup is concerned is 6C compared to a long directed card like the 5970, and 10C compared to a pair of shorter 6970s and an additional case fan. The ultimate purpose of this exercise is to illustrate how placing a hard drive (or any other component) behind the 6990 is a poor choice. As many cases do have hard drive bays around this location, you’d be best served putting your drives as far away from a 6990 as possible.

And while we haven’t been able to test this, as far as air-cooled overclocking is concerned the best step may be to take this one step further and turn the closest air intake into an exhaust. A number of cases keep an intake at the front of the case roughly in line with PEG slot 1; turning this into an exhaust would much more effectively dissipate the heat that the 6990 is throwing into the case, and this may be what AMD was going for all along. Video cards that vent air out of the front and the rear of the case, anyone?

Ultimately the 6990 is a doozy, the likes of which we haven’t seen before. Between its greater power consumption and its dual-exhaust cooler, it requires a greater attention to cooling than any other dual-GPU card. Or to put this another way, it’s much more of a specialized card than the 5970 was.


130 Comments


  • nafhan - Tuesday, March 08, 2011 - link

    I generally buy cards in the $100-$200 range. Power usage has gone up a bit while performance has increased exponentially over the last 10 years.
  • LtGoonRush - Tuesday, March 08, 2011 - link

    I'm disappointed at the choices AMD made with the cooler. The noise levels are truly intolerable, it seems like it would have made more sense to go with a triple-slot card that would be more capable of handling the heat without painful levels of noise. It'll be interesting to see how the aftermarket cooler vendors like Arctic Cooling and Thermalright handle this.
  • Ryan Smith - Tuesday, March 08, 2011 - link

    There's actually a good reason for that. I don't believe I mentioned this in the article, but AMD is STRONGLY suggesting not to put a card next to the 6990. It moves so much air that another card blocking its airflow would run the significant risk of killing it.

    What does this have to do with triple-slot coolers? By leaving a space open, it's already taking up 3 spaces. If the cooler itself takes up 3 spaces, those 3 spaces + 1 open space is now 4 spaces. You'd be hard pressed to find a suitable ATX board and case that could house a pair of these cards in Crossfire if you needed 8 open spaces. Triple slot coolers are effectively the kryptonite for SLI/CF, which is why NVIDIA isn't in favor of them either (but that's a story for another time).
  • arkcom - Tuesday, March 08, 2011 - link

    2.5 slot cooler. That would guarantee at least half a slot is left for airspace.
  • Quidam67 - Tuesday, March 08, 2011 - link

    If it means a quieter card then that might have been a compromise worth making. Also, 2.5 slots would stop people from making the ill-advised choice of using the slot next to the card, thus possibly killing it!
  • strikeback03 - Tuesday, March 08, 2011 - link

    With the height of a triple slot card maybe they could mount the fan on an angle to prevent blocking it off.
  • kilkennycat - Tuesday, March 08, 2011 - link

    Triple-slot coolers... no need!!

    However, if one is even contemplating Crossfire or SLI then a triple-slot space between the PCIe X16 SOCKETS for a pair of high-power 2-slot-cooler graphics cards with "open-fan" cooling (like the 6990) is recommended to avoid one card being fried by lack of air. This socket-spacing allows a one-slot clear air-space for the "rear" card's intake fan to "breathe". (Obviously, one must not plug any other card into any motherboard socket present in this slot)

    In the case of a pair of 6990 (or a pair of nVidia's upcoming dual-GPU card), a minimum one-slot air-space between cards becomes MANDATORY, unless custom water or cryo cooling is installed.

    Very few current Crossfire/SLI-compatible motherboards have triple-slot (or more) spaces between the two PCIe X16 connectors while simultaneously having genuine X16 data-paths to both connectors. That socket spacing is becoming more common with high-end Sandy Bridge motherboards, but functionality may still be constrained by X8 PCIe data-paths at the primary pair of X16 connectors.

    To even attempt to satisfy the data demands of a pair of 6990s in CrossFire with a SINGLE physical CPU, you really do need an X58 motherboard and a Gulftown Core i7 990X processor, or maybe a heavily overclocked Core i7 970. For X58 motherboards with triple-spaced PCIe sockets properly suitable for Crossfire or SLI, you need to look at the ASRock X58 "Extreme" series of motherboards. These do indeed allow full X16 data-paths to the two primary PCIe X16 "triple-spaced" sockets.

    Many ATX motherboards have a third "so-called" PCIe X16 socket in the "slot 7" position. However, this slot is always incapable of a genuine X16 pairing with either of the other two "X16" sockets. Anyway, this "slot 7" location will not allow any more than a two-slot wide card when the motherboard is installed in a PC "tower" -- an open-fan graphics card will have no proper ventilation here, as it comes right up against either the power supply (if bottom-loaded) or the bottom plate of the case.
  • Spazweasel - Tuesday, March 08, 2011 - link

    Exactly. For people who are going to do quad-Crossfire with these, you pretty much have to add the cost of a liquid cooling system to the price of the cards, and it's going to have to be a pretty studly liquid cooler too. Of course, the kind of person who "needs" (funny, using that word!) two of these is also probably the kind of person who would do the work to implement a liquid cooling system, so that may be less of an issue than it otherwise might be.

    So, here's the question (more rhetorical than anything else). For a given ultra-high-end gaming goal, say, Crysis @ max settings, 60fps @ 3x 2500x1600 monitors (something that would require quad Crossfire 69xx or 3-way SLI 580), with a targeted max temperature and noise level... which is the cheaper solution by the time you take into account cooling, case, high-end motherboard, the cards themselves? That's the cost-comparison that needs to be made, not just the cost of the cards themselves.
  • tzhu07 - Tuesday, March 08, 2011 - link

    Before anyone thinks of buying this card stock, you should really go out and get a sense of what that kind of noise level is like. Unless you have a pair of high quality expensive noise-cancelling earbuds and you're playing games at a loud volume, you're going to constantly hear the fan.

    $700 isn't the real price. Add on some aftermarket cooling and that's how much you're going to spend.

    Don't wake the neighbors...
  • Spivonious - Tuesday, March 08, 2011 - link

    70dB is the maximum volume before risk of hearing loss, according to the EPA. http://www.epa.gov/history/topics/noise/01.htm

    Seriously, AMD, it's time to look at getting more performance per Watt.
