Meet the GeForce GTX 680

All things considered, the design of the GeForce GTX 680 is not a radical departure from the GTX 580, but at the same time it has some distinct differences owing to the fact that its TDP is some 50W lower than the GTX 580’s.

Like the past GTX x80 cards, the basic design of the GTX 680 is that of a blower. A radial fan at the rear of the card sucks in air and pushes it towards the front of the card. Notably, due to a combination of card length and the fan position, the “wedge” around the fan has been done away with. NVIDIA tells us that this shouldn’t significantly impact the cooling of the card, particularly since it has a lower TDP in the first place, but when used in SLI it will remove some of the breathing room that the GTX 580 enjoyed.

Looking at the fan itself, compared to the GTX 580 the fan has been moved from the center of the card to the top of the card. This is due to NVIDIA’s port configuration, which uses a stacked DVI connector that consumes what would normally have been part of the exhaust vent on the GTX 580. We’ll get into the port configuration more in a minute, but for the moment the significance is that because the GTX 680 only has half a vent, NVIDIA has moved the fan up to line up with what remains of it.

On that note, the repositioning of the fan also had its own ramifications. Because the fan is now so close to the top and at the same time so close to the rear, NVIDIA went with a unique method of arranging the PCIe power sockets. Rather than having them side-by-side as we’ve seen on countless NVIDIA cards in the past, the sockets are stacked on top of each other in a staggered configuration. With the fan otherwise occupying the space that one of the sockets would take up, this configuration allowed NVIDIA to fit two sockets without lengthening the card just to accommodate the second one. Overall this staggered design is not too difficult to work with, though with one socket facing the opposite way it might require some cable repositioning if you have a well maintained cable run.

Moving on, when we remove the shroud on the GTX 680 we see the fan, baseplate, and heatsink in full detail. NVIDIA is using a stacked aluminum fin heatsink, very similar to what we saw on the GTX 580. Underneath the heatsink NVIDIA is using a set of three heatpipes to transfer heat between the GPU and the heatsink. This is as opposed to the vapor chamber on the GTX 580, and while we have no way to empirically test the difference, given the high efficiency of vapor chambers it’s likely that this setup isn’t quite as efficient, though to what degree we couldn’t say.

Finally, after removing the fan, baseplate, and heatsink, we can see the PCB in full detail. Unlike GF110 and GF114, GK104 is not capped with an IHS, allowing the heatsink to make direct contact with the GPU die. Meanwhile arranged around the GPU we can see the 8 2Gb GDDR5 RAM modules that give the GTX 680 its 2GB of RAM. These are Hynix R0C modules, which means they’re rated for 6GHz, the stock memory speed for the GTX 680. Overall the card measures 10” long with no overhang from the shroud, making it 0.5” shorter than the GTX 580.
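
As a quick sanity check on those memory figures, the short sketch below runs the arithmetic. The module count, density, and 6GHz data rate come straight from the text, while the 256-bit memory bus width is our own assumption here since it isn’t listed above.

    # Back-of-the-envelope check of the GTX 680 memory configuration.
    # The 256-bit bus width is an assumption (not stated above); the
    # other figures come straight from the text.
    modules = 8                # GDDR5 packages on the PCB
    density_gbit = 2           # 2Gb per module
    data_rate_gbps = 6         # 6GHz effective data rate per pin
    bus_width_bits = 256       # assumed memory interface width

    capacity_gbytes = modules * density_gbit / 8             # gigabits -> gigabytes
    bandwidth_gbytes_s = data_rate_gbps * bus_width_bits / 8  # GB/s

    print(f"Capacity:  {capacity_gbytes:.0f} GB")             # 2 GB
    print(f"Bandwidth: {bandwidth_gbytes_s:.0f} GB/s")        # 192 GB/s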

Looking at the top of the card, as always we see the SLI connectors. Following in the footsteps of the GTX 580, the GTX 680 features 2 SLI connectors, allowing for up to 3-way SLI.

Meanwhile at the front of the card we see the I/O bracket. As we alluded to previously, the GTX 680 uses a stacked DVI design here; NVIDIA has done everything they can to keep the DVI ports at the very bottom of the card to avoid impeding airflow, but the upper DVI port still occupies roughly 40% of what would otherwise be the vent. Altogether the GTX 680 features 2 DL-DVI ports, a full size HDMI port, and a full size DisplayPort.

While NVIDIA has used DVI and HDMI ports for quite some time, this is the first time NVIDIA has included DisplayPort on a reference design. Unfortunately we find that this ruffles our feathers a bit, although this isn’t strictly NVIDIA’s fault. As we’ve covered in the past, DisplayPort comes in both a full size and a miniDP configuration – AMD in particular has used miniDP since the Radeon HD 6800 series in 2010. And while we’re happy to see DisplayPort finally make it into an NVIDIA reference design, the fact that it’s a full size DisplayPort is less than encouraging, because at this point in time the full size port has largely been displaced by miniDP on source devices.

Ultimately the fault for this lies more with VESA than with NVIDIA, but it’s indicative of a larger problem in the DisplayPort community: both full size DP and miniDP are equally valid and equally capable ports. While full size DisplayPort has the distinction of coming first, thanks in large part to Apple it has largely been displaced by miniDP as the most common variant on source devices. The problem is that both miniDP and full size DisplayPort are now in wide use; wide, redundant use.

At this point desktop computers and video cards coming with full size DisplayPorts is silly at best, and frustrating at worst. The laptop guys aren’t going to give up miniDP due to the space savings, and there’s no especially good reason to use full size DisplayPort on desktops when miniDP offers the same functionality. We would rather see the PC industry standardize on miniDP across all source devices, and thereby eliminate any ambiguity with regards to what cables or adapters are necessary. DisplayPort adoption has been slow enough – having 2 variants of the port on source devices only makes it more confusing for everyone.

Finally, while we’re on the subject of display connectivity we quickly took a look at how the idle clockspeeds of GTX 680 are impacted by the use of multiple displays. With 2 displays GTX 680 can utilize its full idle clocks, but only if both displays are connected via a TMDS type connection (DVI/HDMI) and run with identical timings. But if different timings are used or if one display is connected via DisplayPort, then the GTX 680 will shift to its low power 3D clocks. However if we expand that to 3 monitors and enable NVIDIA Surround, then the GTX 680 can operate at full idle regardless of whether DisplayPort is used or not.
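
To make that behavior a bit more concrete, here is a minimal sketch of the idle clock selection as we observed it. This is purely illustrative logic on our part – not NVIDIA’s actual driver code – and the function and parameter names are our own.

    # Illustrative sketch of the GTX 680 idle clock behavior described
    # above -- not NVIDIA's actual driver logic.
    def idle_state(displays, surround_enabled=False):
        """displays is a list of (connection, timings) tuples,
        e.g. [("DVI", "1080p60"), ("HDMI", "1080p60")]."""
        if len(displays) <= 1:
            return "full idle clocks"
        if len(displays) >= 3 and surround_enabled:
            # With NVIDIA Surround active, full idle regardless of DisplayPort use
            return "full idle clocks"
        connections = {conn for conn, _ in displays}
        timings = {t for _, t in displays}
        if connections <= {"DVI", "HDMI"} and len(timings) == 1:
            # All TMDS-type connections (DVI/HDMI) with identical timings
            return "full idle clocks"
        # Mixed timings, or at least one display on DisplayPort
        return "low power 3D clocks"

    print(idle_state([("DVI", "1080p60"), ("HDMI", "1080p60")]))         # full idle clocks
    print(idle_state([("DVI", "1080p60"), ("DisplayPort", "1080p60")]))  # low power 3D clocks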
