The Card They Beg You to Overclock

Since AMD has equipped the 5970 with fully functional Cypress cores, specifically binned for their excellent performance, it’s a shame the 5970 is only clocked at 725MHz on the core, right? AMD agrees, and has built and will be promoting the 5970 in a manner unlike any previous AMD video card.

Officially, AMD and its vendors can only sell a card that consumes up to 300W of power. That’s all the PCIe specification allows for; anything more would mean selling a non-compliant card. AMD would love to sell a more powerful card, but between breaking the spec and the prospect of scaring off users who don’t have an appropriate power supply (more on this later), they can’t.

But there’s nothing in the rulebook against building a more powerful card and simply shipping it at clocks low enough that it doesn’t break the spec. This is exactly what AMD has done.

For its 294W TDP, the 5970 is entirely overbuilt. The vapor chamber cooling system is built to dissipate 400W, and the card is populated entirely with high-end components, including solid capacitors and high-end VRMs.

Make no mistake: this card was designed to be a single-card 5870CF solution; AMD just can’t sell it as one. In our discussions with them, they came as close as Legal would let them to promising that every card will be able to hit 850MHz on the core (after all, these chips are binned to be better than the 5870’s), and they were nearly as optimistic about memory speeds, although we were given the impression that AMD is a little more concerned about GDDR5 memory bus issues at 5870 speeds.

So with a card that is a pair of 5870s in everything except the shipping specifications, AMD has gone ahead and left it up to the user to put 2 + 2 together and bring the card to its full potential. The card ships with much higher Overdrive caps than AMD’s other cards; instead of 10-20%, here the caps are 1GHz for the core and 1.5GHz for the memory, roughly 38% and 50% above the stock clocks respectively (in comparison, on the 5850 the caps were set below the 5870’s stock speeds). The card effectively has unlimited overclocking headroom within Overdrive; we doubt any 5970 is going to hit those caps on air cooling.
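For reference, the headroom math is simple; this quick sketch just uses the stock and cap clocks quoted above:

    # Overdrive headroom on the 5970, from the stock and cap clocks quoted above.
    stock_core_mhz, stock_mem_mhz = 725, 1000
    cap_core_mhz, cap_mem_mhz = 1000, 1500

    core_headroom = (cap_core_mhz / stock_core_mhz - 1) * 100   # ~38%
    mem_headroom = (cap_mem_mhz / stock_mem_mhz - 1) * 100      # 50%
    print(f"Core headroom: {core_headroom:.0f}%, memory headroom: {mem_headroom:.0f}%")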

One weakness of Overdrive is that it doesn’t let you tweak voltages, which is a problem since AMD has to ship this card at lower voltages in order to meet the 294W TDP. In order to rectify that, AMD will be supplying vendors with a voltage tweaking tool specifically for the 5970, which will then be customized and distributed by vendors to their 5970 users.

Normally any kind of voltage tweaking on a video card makes us nervous due to the lack of guidance – a single GPU doesn’t ship at a wide range of voltages, after all. For overvolting the 5970, AMD has made matters quite simple: you only get one extra choice. The utility we’re using offers two voltages for the core and two for the memory: the shipping voltages and the voltages the 5870 runs at. So you can run your 5970 at 1.05v core or 1.165v core, but nothing higher and nothing in between. It keeps matters simple, and it locks out the ability to feed the core more voltage than it can handle. We haven’t seen any of the vendor-customized versions of the overvolting utility, but we’d expect all of them to have the same cap, if not the same two-setting limit.

All of this comes at a cost, however: power. Cranking up the voltage in particular will drive the power draw of the card way up, and this is the point where the card ceases to meet the PCIe specification. If you want to overclock this card, you’re going to need not just a strong power supply that can deliver its rated wattage, but one that can overdeliver on the rails feeding the PCIe power plugs.
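As a rough illustration of why, dynamic power scales approximately with frequency and the square of voltage, so the overvolt alone is worth a sizable chunk of extra power before the higher clocks are even factored in. The sketch below uses that first-order rule of thumb; the scaling law and the resulting factors are our own estimate, not figures from AMD:

    # First-order estimate only: dynamic power scales roughly as P ~ f * V^2.
    # The voltages are the two settings exposed by the overvolting utility;
    # the scaling law and resulting factors are illustrative, not AMD's numbers.
    v_ship, v_5870 = 1.05, 1.165   # core voltage: shipping vs. 5870 setting
    f_ship, f_oc = 725, 850        # core clock in MHz: shipping vs. 5870 speed

    volt_factor = (v_5870 / v_ship) ** 2           # ~1.23x from the voltage bump alone
    total_factor = (f_oc / f_ship) * volt_factor   # ~1.44x at 5870 clocks and voltage
    print(f"Voltage alone: {volt_factor:.2f}x power, clocks + voltage: {total_factor:.2f}x")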

For overclocked operation, AMD is recommending a 750W power supply capable of delivering at least 20A on the rail the 8-pin plug is fed from, and another 15A on the rail the 6-pin plug is fed from. There are a number of power supplies that can do this, but you need to pay very close attention to what your power supply can actually deliver. Frankly, we’re just waiting for a sob story where this card cooks a power supply when overvolted. Overclocking the 5970 will bring the power draw out of spec; it’s imperative that you make sure you have a power supply that can handle it.
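To put those rail figures in perspective, here is the simple arithmetic; the 12V rail voltage and the 75W slot allowance are standard spec values we’re assuming, not numbers AMD quoted:

    # Rough budget for AMD's overclocking PSU recommendation.
    # The rail currents are AMD's figures; 12V rails and the 75W PCIe slot
    # allowance are standard spec values assumed for this sketch.
    rail_8pin_w = 20 * 12   # 240W deliverable through the 8-pin plug's rail
    rail_6pin_w = 15 * 12   # 180W deliverable through the 6-pin plug's rail
    slot_w = 75             # PCIe x16 slot power limit

    total_w = rail_8pin_w + rail_6pin_w + slot_w
    print(f"Deliverable to the card: {total_w}W vs. the 300W (150 + 75 + 75) the spec formally allows")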

Overall the whole issue leaves us with an odd taste in our mouths. Clearly AMD would have pushed to get the spec raised if it were that simple, and we don’t believe anyone really wants to be selling a card that runs out of spec like this. Both AMD and NVIDIA are going to have to cope with the fact that power draw has been creeping up on their cards over time, so this isn’t going to be the last over-300W card we see. I would not be surprised if we saw a newer revision of the PCIe spec that allowed more power for video cards – if you can cool 400W, then that’s where the new maximum is going to be for luxury video cards like the 5970.

Last, but certainly not least, there’s the matter of real-world testing. Although AMD told us that the 5970 should be able to hit 5870 clockspeeds, we didn’t have the kind of luck we were expecting. We have two 5970s: one for myself, and one for Anand for Eyefinity and power/noise/heat testing. My 5970 hit 850MHz/1200MHz once overvolted (it had very little headroom without it), but the performance was sporadic. The VRM overcurrent protection mechanism started kicking in and momentarily throttling the card down to 550MHz/1000MHz, and not just in FurMark/OCCT. Running a real application (the Distributed.net RC5-72 Stream client) ultimately resulted in the same thing. With the core overvolted, our card kept throttling on FurMark all the way down to 730MHz. While the card is stable in the sense that it doesn’t crash, our verdict is that our card is not capable of performing at 5870 clockspeeds.
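For anyone who wants to check their own card for the same behavior, a minimal sketch along these lines will tally throttle dips from a clock log; the log path and column header here are placeholders standing in for whatever your monitoring tool actually writes:

    # Minimal sketch: count how often the core clock dips below the target.
    # "gpu_log.csv" and the column name are hypothetical - point them at the
    # CSV your GPU clock logger actually produces.
    import csv

    TARGET_MHZ = 850                # the overclock being attempted
    LOG_FILE = "gpu_log.csv"        # assumed log path

    samples, dips = 0, 0
    with open(LOG_FILE, newline="") as f:
        for row in csv.DictReader(f):
            clock = float(row["core_clock_mhz"])   # assumed column header
            samples += 1
            if clock < TARGET_MHZ - 5:             # small tolerance for sensor jitter
                dips += 1

    print(f"Throttled on {dips} of {samples} samples ({100 * dips / max(samples, 1):.1f}%)")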

We’ve attempted to isolate the cause of this, and we feel we can rule out temperature, since feeding the card cold morning air had no effect. This leaves us with power. The power supply we use is a Corsair 850TX, which has a single 12V rail rated for 70A. We do not believe the issue is the power supply, but we don’t have another unit on hand to test with, so we cannot eliminate it. Our best guess is that in spite of the high-quality VRMs on this card, they simply aren’t up to the task of powering the card at 5870 speeds and voltages.

We’ve gone ahead and done our testing at these speeds anyhow (since overcurrent protection doesn’t cause any quality issues); however, it’s likely that these results are held back somewhat by throttling, and that a card that can avoid throttling would perform slightly better. We’re going to be retesting this card in the morning with some late suggestions from AMD (mainly forcing the fan to 100%) to see if this changes things, but we are fairly confident right now that it’s not heat related.

As for Anand’s card, it fared even worse: it locked up his rig when trying to run OCCT at 5870 speeds. VRM throttling is one thing, but crashing is another; even if it’s OCCT, it shouldn’t be happening. We’ve written his card off as being unstable at 5870 speeds, which makes us 0-for-2 in chasing single-card 5870CF performance. Reality is currently in conflict with AMD’s promises.

Note: We have since published an addendum blog post covering VRM temperatures, the culprit behind our throttling issues.

Comments

  • Paladin1211 - Saturday, November 21, 2009 - link

    To be precise, anything above the monitor refresh rate is not going to be recognizable. Mine maxed out at 60Hz 1920x1200. Correct me if I'm wrong.

    Thanks :)
  • noquarter - Saturday, November 21, 2009 - link

    If you read AnandTech's 'Triple Buffering: Why We Love It' article, there is a very slight advantage at more than 60fps even though the display is only running at ~60Hz. If the GPU finishes rendering a frame immediately after the display refresh then that frame will be 16ms stale by the time the display shows it as it won't have the next one ready in time. If someone started coming around the corner while that frame is stale it'd be 32ms (stale frame then fresh frame) before the first indicator showed up. This is simplified as with v-sync off you'll just get torn frames but the idea is still there.

    To me, it's not a big deal, but if you're looking at a person with quick reaction speed of 180ms, 32ms of waiting for frames to catch up could be significant I guess. If you increase the fps past 60 you're more likely to have a fresh frame rendered right before each display refresh.
  • T2k - Friday, November 20, 2009 - link

    Seriously: is he no more...? :D
  • XeroG1 - Thursday, November 19, 2009 - link

    OK, so seriously, did you really take a $600 video card and benchmark Crysis Warhead without turning it all the way up? The chart says "Gamer Quality + Enthusiast Shaders". I'm wondering if that's really how you guys benchmarked it, or if the chart is just off. But if not, the claim "Crysis hasn’t quite fallen yet, but it’s very close" seems a little odd, given that you still don't have all the settings turned all the way up.

    Incidentally, I'm running a GeForce 9800 GTX (not plus) and a Core2Duo E8550, and I play Warhead at all settings enthusiast, no AA, at 1600x900. At those settings, it's playable for me. People constantly complain about performance on that title, but really if you just turn down the resolution, it scales pretty well and still looks better than anything else on the market IMHO.
  • XeroG1 - Thursday, November 19, 2009 - link

    Er, oops - that was supposed to say "E8500", not "E8550", since there is no 8550.
  • mapesdhs - Thursday, November 19, 2009 - link


    Carnildo writes:
    > ... I was the administrator for a CAVE system. ...

    Ditto! :D


    > ... ported a number of 3D shooters to the platform. You haven't
    > lived until you've seen a life-sized opponent come around the
    > corner and start blasting away at you.

    Indeed, Quake2 is amazing in a CAVE, especially with both the player
    and the gun separately motion tracking - crouch behind a wall and be
    able to stick your arm up to fire over the wall - awesome! But more
    than anything as you say, it's the 3D effect which makes the experience.

    As for surround-vision in general... Eyefinity? Ha! THIS is what
    you want:

    http://www.sgidepot.co.uk/misc/lockheed_cave.jpg

    270 degree wraparound, 6-channel CAVE (Lockheed flight sim).

    I have an SGI VHS demo of it somewhere, must dig it out sometime.


    Oh, YouTube has some movies of people playing Quake2 in CAVE
    systems. The only movie I have of me in the CAVE I ran was
    a piece taken of my using COVISE visualisation software:

    http://www.sgidepot.co.uk/misc/iancovise.avi

    Naturally, filming a CAVE in this way merely shows a double-image.


    Re people commenting on GPU power now exceeding the demands for
    a single display...

    What I've long wanted to see in games is proper modelling of
    volumetric effects such as water, snow, ice, fire, mud, rain, etc.
    Couldn't all this excess GPU power be channeled into ways of better
    representing such things? It would be so cool to be able to have
    genuinely new effects in games such as naturally flowing lava, or
    an avalanche, or a flood, tidal wave, storm, landslide, etc. By this
    I mean it being done so that how the substance behaves is governed
    by the environment in a natural way (physics), not hard coded. So far,
    anything like this is just simulated - objects involved are not
    physically modelled and don't interact in any real way. Rain is
    a good example - it never accumulates, flows, etc. Snow has weight,
    flowing water can make things move, knock you over, etc.

    One other thing occurs to me: perhaps we're approaching a point
    where a single CPU is just not enough to handle what is now possible
    at the top-end of gaming. To move them beyond just having ever higher
    resolutions, maybe one CPU with more & more cores isn't going to
    work that well. Could there ever be a market for high-end PC
    gaming with 2-socket mbds? I do not mean XEON mbds as used for
    servers though. Just thoughts...

    Ian.

  • gorgid - Thursday, November 19, 2009 - link

    With their cards, ASUS provides software where you can adjust core and memory voltages. You can adjust the core voltage up to 1.4V.

    Read that:
    http://www.xtremesystems.org/forums/showthread.php...

    I ordered one from here:

    http://www.provantage.com/asus-eah5970g2dis2gd5a~7...


  • K1rkl4nd - Wednesday, November 18, 2009 - link

    Am I the only one waiting for TI to come out with a 3x3 grid of 1080p DLPs? You'd think if they can wedge ~2.2 million mini-mirrors on a chip, they should be able to scale that up to a native 5760x3240. Then they could buddy up with Dell and sell it as an Alienware premium package of display + computer capable of using it.
  • skrewler2 - Wednesday, November 18, 2009 - link

    When can we see benchmarks of 2x 5970 in CF?
  • Mr Perfect - Wednesday, November 18, 2009 - link

    "This means that it’s not just a bit quieter to sound meters, but it really comes across that way to human ears too"

    Have you considered using the dBA filter rather than just raw dB? dBA is weighted to measure the tones that the human ear is most sensitive to, so noise-oriented sites like SPCR use dBA instead.
