Meet The GeForce GTX 690

Much like the GTX 680 launch and the GTX 590 before it, the first generation of GTX 690 cards are reference boards built by NVIDIA, with NVIDIA using their partners for distribution and support. In fact NVIDIA is enforcing some fairly strict standards on their partners to maintain a consistent image for the GTX 690 – not only will all of the launch cards be based on NVIDIA’s reference design, but NVIDIA’s partners will be severely restricted in how they can dress up their cards, with stickers not allowed anywhere on the shroud. Partners will only be able to put their mark on the PCB, meaning the bottom and the rear of the card. In the future we’d expect to see NVIDIA’s partners do some customizing through waterblocks and the like, but for the most part this will be the face of the GTX 690 throughout its entire run.

And with that said, what a pretty face it is.

Let’s get this clear right off the bat: the GTX 690 is truly a luxury video card. If the $1000 price tag didn’t sell that point, NVIDIA’s design choices will. Many of those choices are grounded in technical reasons, but at the same time NVIDIA has gone out of their way to build the GTX 690 out of metals instead of plastics not for any major performance or quality benefit, but simply because they can. The GTX 690 is a luxury video card, and NVIDIA intends to make that fact unmistakable.

But before we get too far ahead of ourselves, let’s talk about basic design. At its most basic level, the GTX 690 is a reuse of the design principles of the GTX 590. With the possible exception of overclocking, the GTX 590 was a well-designed card that greatly improved on past NVIDIA dual-GPU cards and managed to dissipate 365W of heat without sounding like a small hurricane. Since the GTX 690 is designed around the same power constraints and is at the same time a bit simpler in some regards – the GPUs are smaller and the memory buses narrower – NVIDIA has opted to reuse the GTX 590’s basic design.

The reuse of the GTX 590’s design means that the GTX 690 is a 10” long card with a double-wide cooler, making it the same size as the single-GPU GTX 680. The basis of the GTX 690’s cooler is a single axial fan sitting at the center of the card, with a GPU and its RAM on either side. Heat from one GPU goes out the rear of the card, while heat from the other GPU goes out the front. Heat transfer is once again provided by a pair of nickel-tipped aluminum heatsinks attached to vapor chambers, which also marks the first time we’ve seen a vapor chamber used with a 600-series card. Meanwhile a metal baseplate runs along the card at the same height as the top of the GPUs, not only providing structural rigidity but also cooling the VRMs and RAM.

Compared to the GTX 590, however, NVIDIA has made a couple of minor tweaks. The first is that NVIDIA has moved the baseplate a bit higher on the GTX 690 so that it covers every component other than the GPUs, meaning those components no longer need to stick through the baseplate. The idea here is that turbulence is reduced since airflow doesn’t need to deal with those obstructions, instead being generally guided by small channels in the baseplate. The second change is that NVIDIA has rearranged the I/O port configuration so that the stacked DVI connector sits at the very bottom of the bracket rather than roughly in the middle, maximizing how much space is available for venting hot air out the front of the card. In practice these aren’t huge differences – our test results don’t find the GTX 690 to be significantly quieter than the GTX 590 under gaming loads – but every bit helps.


Top: GTX 590. Bottom: GTX 690

Of course this design means that you absolutely need an airy case – you’re effectively dissipating 150W to 170W not just into your case, but straight towards the front of your case. As we saw with the GTX 590 and the Radeon HD 6990 this has a detrimental effect on anything that may be directly behind the video card, which for most cases is going to be the HDD cage. As we did with the GTX 590, we took some quick temperature readings with a hard drive positioned directly behind the GTX 690 in order to get an idea of the impact of exhausting hot air in this fashion.

Seagate 500GB Hard Drive Temperatures
Video Card         Temperature
GeForce GTX 690    38C
Radeon HD 7970     28C
GeForce GTX 680    27C
GeForce GTX 590    42C
Radeon HD 6990     37C
Radeon HD 5970     31C

Unsurprisingly the end result is very similar to the GTX 590’s. The temperature increase is reduced somewhat thanks to the card’s lower TDP, but we’re still driving up the temperature of our HDD by over 10C. This is still well within the safe operating range of a HDD and in principle shouldn’t cause problems, but our best advice to GTX 690 buyers is to keep any drive bays directly behind the card clear, just in case. That’s the tradeoff for a cooler that’s quieter and capable of dissipating more heat than older blower designs.
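For reference, the deltas we’re describing can be pulled straight from the table above; here’s a quick sketch (treating the rear-exhausting GTX 680 as the baseline is our choice of reference point, not anything from NVIDIA):

```python
# HDD temperatures (C) with the drive mounted directly behind each card,
# copied from the table above.
hdd_temp_c = {
    "GeForce GTX 690": 38,
    "Radeon HD 7970": 28,
    "GeForce GTX 680": 27,
    "GeForce GTX 590": 42,
    "Radeon HD 6990": 37,
    "Radeon HD 5970": 31,
}

# Use the rear-exhausting GTX 680 as the baseline and compute how much
# each front-exhausting design adds.
baseline = hdd_temp_c["GeForce GTX 680"]
delta_c = {card: temp - baseline for card, temp in hdd_temp_c.items()}

print(delta_c["GeForce GTX 690"])  # 11
print(delta_c["GeForce GTX 590"])  # 15
```

The GTX 690 lands at +11C over the baseline versus the GTX 590’s +15C, which is the improvement the lower TDP buys.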

Moving on, let’s talk about the technical details of the GTX 690. GPU power is supplied by 10 VRM phases, divided up into 5 phases per GPU. Like many other aspects of the GTX 690 this is the same basic design as the GTX 590, which means it should be enough to push up to 365W, but it’s no more designed for overvolting than the GTX 590 was. Any overclocking potential with the GTX 690 will come from the fact that the card’s default configuration is for 300W, allowing for some liberal adjustment of the power target.
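To put that VRM count in perspective, a back-of-the-envelope sketch of the per-phase load follows; the even split between GPUs and the decision to ignore RAM, fan, and bridge power are simplifying assumptions on our part, not NVIDIA specifications:

```python
# 365W is the GTX 590-class electrical limit cited above; assume it splits
# evenly between the two GPUs and ignore non-GPU power consumers (a
# simplification for illustration only).
board_limit_w = 365
phases_per_gpu = 5

per_gpu_w = board_limit_w / 2              # 182.5W per GPU, roughly
per_phase_w = per_gpu_w / phases_per_gpu   # 36.5W per VRM phase

print(per_phase_w)  # 36.5
```

Roughly 36W per phase at the 365W ceiling is a workable load, but it leaves little margin for the higher currents overvolting would demand, consistent with the card not being designed for it.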

Meanwhile the RAM on the GTX 690 is an interesting choice. NVIDIA is using Samsung 6GHz GDDR5 as opposed to the Hynix 6GHz GDDR5 they used on the GTX 680. We haven’t seen much of Samsung lately, and in fact the last time we had a product with Samsung GDDR5 cross our path was on the GTX 590. This may or may not be significant, but it’s something to keep in mind for when we’re talking about overclocking.

Elsewhere, NVIDIA’s choice of PCIe bridge is a PLX PCIe 3.0 bridge chip, marking the first time we’ve seen NVIDIA use a 3rd party bridge. With the GTX 590 and earlier dual-GPU cards NVIDIA used their NF200 bridge, a PCIe 2.0-capable bridge chip designed by NVIDIA’s chipset group. However, as NVIDIA no longer has a chipset group they no longer have a group to design such chips, and with NF200 now outdated in the face of PCIe 3.0, NVIDIA has turned to PLX to provide a PCIe 3.0 bridge chip.

It’s worth noting that because NVIDIA is using a 3rd party PCIe 3.0 bridge here, they’ve opened up PCIe 3.0 support compared to the GTX 680. Whereas the GTX 680 officially only supported PCIe 3.0 on Ivy Bridge systems – specifically excluding Sandy Bridge-E – NVIDIA is enabling PCIe 3.0 on SNB-E systems thanks to the use of the PLX bridge. So SNB-E system owners won’t need to resort to registry hacks to enable PCIe 3.0, and there don’t appear to be any stability concerns on SNB-E with the PLX bridge. Meanwhile for users with PCIe 2.0 systems such as SNB, the PLX bridge supports the simultaneous use of PCIe 3.0 and PCIe 2.0, so regardless of the host system the GK104 GPUs will always be communicating with each other over PCIe 3.0.
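For a sense of what the jump from the PCIe 2.0 NF200 to a PCIe 3.0 bridge buys the GPU-to-GPU link, here’s a quick sketch of per-direction x16 bandwidth using the standard PCIe signaling rates and line encodings:

```python
def pcie_x16_gbps(transfer_rate_gt, payload_bits, total_bits, lanes=16):
    """Effective per-direction bandwidth in GB/s for a PCIe x16 link."""
    bytes_per_lane = transfer_rate_gt * 1e9 * payload_bits / total_bits / 8
    return bytes_per_lane * lanes / 1e9

# PCIe 2.0: 5 GT/s with 8b/10b encoding
gen2 = pcie_x16_gbps(5, 8, 10)
# PCIe 3.0: 8 GT/s with 128b/130b encoding
gen3 = pcie_x16_gbps(8, 128, 130)

print(gen2)  # 8.0
print(round(gen3, 2))  # 15.75
```

In other words, the PLX bridge nearly doubles the bandwidth available between the two GK104 GPUs compared to what NF200 could offer.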

Next, let’s talk about external connectivity. On the power side of things the GTX 690 features two 8-pin PCIe power sockets, allowing the card to safely draw up to 375W. The overbuilt power delivery system allows NVIDIA to sell the card as a 300W card while giving it some overclocking headroom for enthusiasts who want to play with the card’s power target. Meanwhile at the front of the card we find the sole SLI connector, which allows the GTX 690 to be connected to another GTX 690 for quad-SLI.
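The arithmetic behind those power figures is straightforward; a quick sketch using the PCIe specification’s per-connector limits:

```python
# Power budget per the PCIe spec: 75W from the x16 slot plus 150W
# per 8-pin connector.
PCIE_SLOT_W = 75
EIGHT_PIN_W = 150

max_spec_draw_w = PCIE_SLOT_W + 2 * EIGHT_PIN_W  # 375W
default_tdp_w = 300

# Headroom available when raising the power target from the 300W default.
headroom_pct = 100 * (max_spec_draw_w - default_tdp_w) / default_tdp_w

print(max_spec_draw_w)  # 375
print(headroom_pct)     # 25.0
```

That 25% gap between the default configuration and the in-spec maximum is where the card’s power-target overclocking headroom comes from.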

As for display connectivity, NVIDIA is reusing the same port configuration we first saw with the GTX 590. This means 3 DL-DVI ports (2 I-type and 1 D-type) and a mini-DisplayPort for a 4th display. Interestingly NVIDIA is tying the display outputs to both GPUs rather than to a single GPU, and since NVIDIA still lacks display flexibility on par with AMD, this means the GTX 690 has display configuration limitations similar to the GTX 590 and GTX 680 SLI. We’ve attached the relevant diagrams below, but in short you can’t use one of the DVI ports for a 4th monitor unless you’re in surround mode. It’s not clear at this time where DisplayPort 1.2’s multi-display capability fits into this, but since MST hubs are still not available it’s not something that can be used right now anyhow.

Last but certainly not least we have the luxury aspects of the GTX 690. While the basic design of the GTX 690 resembles the GTX 590, NVIDIA has replaced virtually every bit of plastic with metal for aesthetic/perceptual purposes. The basic shroud is composed of cast aluminum while the fan housing is made of injection-molded magnesium. In fact the only place you’ll find plastic on the shroud is the polycarbonate windows over the heatsinks, which let you see the heatsinks just because.

To be clear, the GTX 590 was a solid card and NVIDIA could just as well have used plastic again to no detriment, but the use of metal is definitely a noticeable change. The GTX 690 takes the “solid” concept to a completely different level, and while I have no intention of testing it, you could probably clock someone with the card and cause more damage to them than to the GTX 690. Couple that with the return of the LED-backlit GeForce logo – this time even larger and in the center of the card – and it’s clear that NVIDIA not only wants buyers to feel like they’ve purchased a solid card, but to be able to show it off in a case with a windowed side panel.

Surprisingly, there’s one place where NVIDIA didn’t put a metal part on the GTX 690 that they did on the GTX 590: the back. The GTX 590 shipped with a pair of partial backplates to serve as heatsinks for the RAM on the back of the card, and while the GTX 690 doesn’t have any RAM on its backside thanks to the smaller number of chips required, I’m genuinely surprised NVIDIA didn’t throw in a backplate for the same reason as the metal shroud – just because. Backplates are the scourge of video cards when cards are placed directly next to each other because of the space they occupy, but since the GTX 690 needs at least 1 free adjacent slot anyhow, this is one of the few times where a backplate wouldn’t get in the way.

Comments

  • chadwilson - Thursday, May 3, 2012 - link

    OpenCL by its very nature is open; it is not an AMD API.
  • CeriseCogburn - Friday, May 4, 2012 - link

    Not after amd gets through with it.
  • silverblue - Friday, May 4, 2012 - link

    We'll see once somebody posts benchmarks of it.
  • CeriseCogburn - Friday, May 11, 2012 - link

    Excuse me but you're wrong, again.
    " by Ryan Smith on Thursday, May 10, 2012
    According to WinZip it only supports AMD GPUs, which is why we're not using it in NVIDIA reviews at this time. "
    Ryan's comment from the 670 release review.
  • chadwilson - Friday, May 4, 2012 - link

    You haven't bothered to do even the most basic research as to who owns OpenCL, have you? Perhaps you should visit Google before posting hyperbole.
  • CeriseCogburn - Saturday, May 5, 2012 - link

    I'm sure the gamer's manifesto amd company "ownz it" now, and also certain it has immediately become all of yours favorite new benchmark you cannot wait to demand be shown here 100% of the time, it's so gaming evolved.
  • CeriseCogburn - Friday, May 11, 2012 - link

    Here's some research mt know it all: " by Ryan Smith on Thursday, May 10, 2012
    According to WinZip it only supports AMD GPUs, which is why we're not using it in NVIDIA reviews at this time. "
    --
    Congratulations on utter FAIL.
  • eman17j - Sunday, August 19, 2012 - link

    look at this website

    http://developer.nvidia(dot)com/cuda/opencl
  • prophet001 - Thursday, May 3, 2012 - link

    First off, thank you for this review. If you didn't do this, we'd have no idea how these GPUs perform in the wild. It is very nice to come here and read a graph and make educated decisions on which card we should purchase. It is appreciated.

    The one thing that I wanted to question is why you feel that you can't recommend the 7970. At the very least perhaps the recommendation of which card to get should be based on the game you're playing.

    Reviewing the data you published, the average frame rates for the 5 top performers over all benchmarks are:

    680 SLI 119 fps
    690 GTX 116 fps
    7970 CF 103 fps
    680 GTX 72.9 fps
    7970 65.5 fps

    Also, the number of times which the 7970 dipped below 60 fps in the benchmarks (excluding the minimum frame rate benchmarks) alone, without the 680 doing the same was 4. This is over 29 benchmarks and some of the dips were minimal.

    This aligned with the price considerations makes me wonder why one wouldn't consider the 7970?
  • Ryan Smith - Thursday, May 3, 2012 - link

    "The one thing that I wanted to question is why you feel that you can't recommend the 7970. At the very least perhaps the recommendation of which card to get should be based on the game you're playing."

    Under normal circumstances we would do this. For example GTX 570 vs Radeon HD 6970 last year; the two traded blows often enough that it came down to the game being played. However the key was that the two were always close.

    In 20% of our games, 7970CF performance is nowhere close to GTX 690 because CF is broken in those games. It would be one thing if AMD's CF scaling in those games was simply weaker, but instead we have no scaling and negative scaling in games that are 5+ months old.

    For single card setups AMD is still fine, but I cannot in good faith recommend CF when it's failing on major games like this. Because you never know what games in the future may end up having the same problem.
