Meet The GeForce GTX 690

Much like the GTX 680 launch and the GTX 590 before it, the first generation of GTX 690 cards are reference boards being built by NVIDIA, with NVIDIA using their partners for distribution and support. In fact, NVIDIA is enforcing some pretty strict standards on their partners to maintain a consistent image of the GTX 690 – not only will all of the launch cards be based on NVIDIA’s reference design, but NVIDIA’s partners will be severely restricted in how they can dress up their cards, with stickers not being allowed anywhere on the shroud. Partners will only be able to put their mark on the PCB, meaning the bottom and the rear of the card. In the future we’d expect to see NVIDIA’s partners do some customizing through waterblocks and such, but for the most part this will be the face of the GTX 690 throughout its entire run.

And with that said, what a pretty face it is.

Let’s get this clear right off the bat: the GTX 690 is truly a luxury video card. If the $1000 price tag didn’t sell that point, NVIDIA’s design choices will. Many of those choices are grounded in technical reasons, but at the same time NVIDIA has gone out of their way to build the GTX 690 out of metals instead of plastics not for any major performance or quality benefit, but simply because they can. The GTX 690 is a luxury video card and NVIDIA intends to make that fact unmistakable.

But before we get too far ahead of ourselves, let’s talk about the basic design. At its most fundamental level, the GTX 690 is a reuse of the design principles of the GTX 590. With the possible exception of overclocking, the GTX 590 was a well-designed card that greatly improved on NVIDIA’s past dual-GPU cards and managed to dissipate 365W of heat without sounding like a small hurricane. Since the GTX 690 is designed around the same power constraints and is at the same time a bit simpler in some regards – the GPUs are smaller and the memory buses narrower – NVIDIA has opted to reuse the GTX 590’s basic design.

The reuse of the GTX 590’s design means that the GTX 690 is a 10” long card with a double-wide cooler, making it the same size as the single-GPU GTX 680. The basis of the GTX 690’s cooler is a single axial fan sitting at the center of the card, with a GPU and its RAM on either side. Heat from one GPU goes out the rear of the card, while heat from the other GPU goes out the front. Heat transfer is once again provided by a pair of nickel-tipped aluminum heatsinks attached to vapor chambers, which also marks the first time we’ve seen a vapor chamber used with a 600 series card. Meanwhile a metal baseplate runs along the card at the same height as the top of the GPUs, providing not only structural rigidity but also cooling for the VRMs and RAM.

Compared to the GTX 590, however, NVIDIA has made a couple of minor tweaks. The first is that NVIDIA has moved the baseplate a bit higher on the GTX 690 so that it covers all of the components other than the GPUs, meaning those components no longer need to stick up through the baseplate. The idea here is that turbulence is reduced since airflow doesn’t need to deal with those obstructions, and is instead driven through small channels in the baseplate. The second change is that NVIDIA has rearranged the I/O port configuration so that the stacked DVI connector sits at the very bottom of the bracket rather than roughly in the middle, maximizing how much space is available for venting hot air out of the front of the card. In practice these aren’t huge differences – our test results don’t find the GTX 690 to be significantly quieter than the GTX 590 under gaming loads – but every bit helps.


Top: GTX 590. Bottom: GTX 690

Of course this design means that you absolutely need a well-ventilated case – you’re effectively dissipating 150W to 170W not just into your case, but straight towards the front of it. As we saw with the GTX 590 and the Radeon HD 6990, this has a detrimental effect on anything that may be directly behind the video card, which for most cases is going to be the HDD cage. As we did with the GTX 590, we took some quick temperature readings with a hard drive positioned directly behind the GTX 690 in order to get an idea of the impact of exhausting hot air in this fashion.

Seagate 500GB Hard Drive Temperatures
Video Card           Temperature
GeForce GTX 690      38C
Radeon HD 7970       28C
GeForce GTX 680      27C
GeForce GTX 590      42C
Radeon HD 6990       37C
Radeon HD 5970       31C

Unsurprisingly the end result is very similar to the GTX 590. The temperature increase is reduced somewhat thanks to the card’s lower TDP, but we’re still driving up the temperature of our HDD by over 10C. That’s still well within the safe operating range of a hard drive and shouldn’t cause problems in practice, but our best advice to GTX 690 buyers is to keep any drive bays directly behind the card clear, just in case. That’s the tradeoff for a cooler that’s quieter and capable of dissipating more heat than older blower designs.
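To put those readings in perspective, here’s a quick sketch computing each card’s delta against the GTX 680 reading from the table above. Treating the GTX 680’s rear-exhausting blower as the baseline is our assumption for illustration, not part of the original test setup.

```python
# Quick sanity check on the HDD temperature table above.
# Assumption: the GeForce GTX 680 reading (27C) serves as the baseline,
# since its blower exhausts out the rear rather than over the drive bay.
readings_c = {
    "GeForce GTX 690": 38,
    "Radeon HD 7970": 28,
    "GeForce GTX 680": 27,
    "GeForce GTX 590": 42,
    "Radeon HD 6990": 37,
    "Radeon HD 5970": 31,
}

baseline = readings_c["GeForce GTX 680"]
for card, temp in readings_c.items():
    print(f"{card:18s} {temp}C  (+{temp - baseline}C vs. GTX 680)")
# The GTX 690 lands at +11C -- the "over 10C" increase noted above --
# versus +15C for the GTX 590 it replaces.
```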

Moving on, let’s talk about the technical details of the GTX 690. GPU power is supplied by 10 VRM phases, divided up into 5 phases per GPU. Like many other aspects of the GTX 690 this is the same basic design as the GTX 590, which means it should be enough to push up to 365W, but it’s no more designed for overvolting than the GTX 590 was. Any overclocking potential with the GTX 690 will instead come from the fact that the card’s default power target is 300W, allowing for some liberal adjustment of that target.

Meanwhile the RAM on the GTX 690 is an interesting choice. NVIDIA is using Samsung 6GHz GDDR5 as opposed to the Hynix 6GHz GDDR5 they used on the GTX 680. We haven’t seen much of Samsung lately, and in fact the last time we had a product with Samsung GDDR5 cross our path was on the GTX 590. This may or may not be significant, but it’s something to keep in mind for when we’re talking about overclocking.
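For reference, 6GHz (effective) GDDR5 on GK104’s 256-bit memory bus works out to the same per-GPU bandwidth figure as the GTX 680; a quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope memory bandwidth per GK104 GPU on the GTX 690.
# 6GHz effective GDDR5 on a 256-bit bus -- the same configuration as the GTX 680.
data_rate_gbps = 6.0    # effective data rate per pin, in Gb/s
bus_width_bits = 256    # memory bus width per GPU

bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8  # bits -> bytes
print(f"Per-GPU memory bandwidth: {bandwidth_gb_s:.0f} GB/s")  # 192 GB/s
```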

Elsewhere, NVIDIA’s choice of PCIe bridge is a PLX PCIe 3.0 bridge, which is the first time we’ve seen NVIDIA use a 3rd party bridge. With the GTX 590 and earlier dual-GPU cards NVIDIA used their NF200 bridge, a PCIe 2.0 capable bridge chip designed by NVIDIA’s chipset group. However, as NVIDIA no longer has a chipset group they also no longer have a group to design such chips, and with NF200 now outdated in the face of PCIe 3.0, NVIDIA has turned to PLX to provide a PCIe 3.0 bridge chip.

It’s worth noting that because NVIDIA is using a 3rd party PCIe 3.0 bridge here, they’ve opened up PCIe 3.0 support compared to the GTX 680. Whereas the GTX 680 officially only supported PCIe 3.0 on Ivy Bridge systems – specifically excluding Sandy Bridge-E – NVIDIA is enabling PCIe 3.0 on SNB-E systems thanks to the use of the PLX bridge. So SNB-E system owners won’t need to resort to registry hacks to enable PCIe 3.0, and there don’t appear to be any stability concerns on SNB-E with the PLX bridge. Meanwhile for users with PCIe 2.0 systems such as SNB, the PLX bridge supports the simultaneous use of PCIe 3.0 and PCIe 2.0, so regardless of the host system the GK104 GPUs will always communicate with each other over PCIe 3.0.
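As a rough illustration of why pushing GPU-to-GPU traffic over the bridge’s PCIe 3.0 link matters even in a PCIe 2.0 host, the sketch below works out per-direction x16 bandwidth from each spec’s published signaling rate and encoding overhead; these are the standard PCIe figures, not anything measured on the card.

```python
# Per-direction bandwidth of a PCIe x16 link.
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding (80% efficiency).
# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding (~98.5% efficiency).
def link_bandwidth_gb_s(gt_per_s, encoding_efficiency, lanes=16):
    # GT/s * efficiency gives usable Gb/s per lane; divide by 8 for GB/s.
    return gt_per_s * encoding_efficiency * lanes / 8

pcie2 = link_bandwidth_gb_s(5.0, 8 / 10)      # ~8.0 GB/s
pcie3 = link_bandwidth_gb_s(8.0, 128 / 130)   # ~15.8 GB/s
print(f"PCIe 2.0 x16: {pcie2:.1f} GB/s per direction")
print(f"PCIe 3.0 x16: {pcie3:.1f} GB/s per direction")
```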

Next, let’s talk about external connectivity. On the power side of things the GTX 690 features two 8-pin PCIe power sockets, allowing the card to safely draw up to 375W. The overbuilt power delivery system allows NVIDIA to sell the card as a 300W card while giving it some overclocking headroom for enthusiasts who want to play with the card’s power target. Meanwhile at the front of the card we find the sole SLI connector, which allows the GTX 690 to be connected to another GTX 690 for quad-SLI.
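The power budget arithmetic behind those figures is simple enough; here’s a quick sketch using the PCIe specification limits of 75W from the slot and 150W per 8-pin connector:

```python
# PCIe power budget for the GTX 690's connector configuration.
slot_w = 75            # PCIe x16 slot limit
eight_pin_w = 150      # per 8-pin PCIe power connector
connectors = 2

max_draw_w = slot_w + connectors * eight_pin_w   # 375W ceiling
default_tdp_w = 300                              # the card's default power target

headroom_w = max_draw_w - default_tdp_w
print(f"Connector ceiling: {max_draw_w}W")
print(f"Headroom over the 300W default: {headroom_w}W "
      f"(~{headroom_w / default_tdp_w:.0%})")
```

That 75W gap is what gives enthusiasts room to raise the power target without exceeding the connectors’ rated limits.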

As for display connectivity, NVIDIA is reusing the same port configuration we first saw with the GTX 590. This means 3 DL-DVI ports (2 I-type and 1 D-type) and a mini-DisplayPort for a 4th display. Interestingly, NVIDIA is tying the display outputs to both GPUs rather than to a single GPU, and since NVIDIA still lacks display flexibility on par with AMD’s, this means the GTX 690 has display configuration limitations similar to the GTX 590 and GTX 680 SLI. We’ve attached the relevant diagrams below, but in short you can’t use one of the DVI ports for a 4th monitor unless you’re in surround mode. It’s not clear where DisplayPort 1.2’s multi-display capability fits into this, but since MST hubs are still not available it’s a moot point for now anyhow.

Last but certainly not least we have the luxury aspects of the GTX 690. While the basic design of the GTX 690 resembles the GTX 590, NVIDIA has replaced virtually every bit of plastic with metal for aesthetic/perceptual purposes. The basic shroud is composed of cast aluminum while the fan housing is made out of injection molded magnesium. In fact the only plastic you’ll find on the shroud is the polycarbonate windows over the heatsinks, which let you see the heatsinks underneath just because.

To be clear, the GTX 590 was a solid card and NVIDIA could have just as well used plastic again to no detriment, but the use of metal is definitely a noticeable change. The GTX 690 takes the “solid” concept to a completely different level, and while I have no intention of testing it, you could probably clock someone with the card and cause more damage to them than to the GTX 690. Couple that with the return of the LED-backlit GeForce logo – this time even larger and in the center of the card – and it’s clear that NVIDIA not only wants buyers to feel like they’ve purchased a solid card, but to be able to show it off in a case with a windowed side panel.

Surprisingly, there’s one place where NVIDIA didn’t put a metal part on the GTX 690 that they did on the GTX 590: the back. The GTX 590 shipped with a pair of partial backplates to serve as heatsinks for the RAM on the back of the card, and while the GTX 690 doesn’t have any RAM on its backside thanks to the smaller number of chips required, I’m genuinely surprised NVIDIA didn’t throw in a backplate for the same reason as the metal shroud – just because. Backplates are normally the scourge of video cards when it comes to placing cards directly next to each other because of the space they occupy, but with the GTX 690 you need at least one free slot anyhow, so this is one of the few times where a backplate wouldn’t get in the way.

Comments
  • james.jwb - Thursday, May 3, 2012 - link

    You are correct, I don't own one... I own three in triple screen. Dell U2412m's.

    I really am at a loss as to what you are on about. It is well known that 16:10 is preferred amongst enthusiasts and professionals for a few reasons. If you want 16:9, fine, go for it, but don't act like it's weird that AT are benching with 16:10 just because you went with cheap ass 16:9 screens.
  • CeriseCogburn - Friday, May 4, 2012 - link

    Yes of course you are at a loss, you don't understand a word so why reply ?
    You're all at a loss.
    ROFL
  • yelnatsch517 - Friday, May 4, 2012 - link

    Are you being sarcastic or an idiot?
    From my experience 1920x1200 24" monitors are the MAJORITY. My work has roughly 50 24" monitors, all in that resolution. My HP ZR24W is 1920x1200 as well. The only 1080p 24" monitor that I have even seen is the TN panel that came with an HP computer.

    If you are talking about computer monitors, 1920x1200 is the dominant resolution. If you are talking about TVs, then obviously 1080p is the norm.
  • CeriseCogburn - Saturday, May 5, 2012 - link

    There are 242 - count them, well over 200, nearly 250 1920x1080 monitors at the egg.
    _
    In your great experience, there are 16 that fit your 1920x1200 dreampipe FUD scenario at the egg, with most of them, well over half, over $400 each, while the 242 common monitors you all buy as you pinch every penny and whine about $10 difference in videocard prices are well under $200 each a lot of the time.
    So now suddenly, you all spend way over $300 to plus $400 for 11% more pixels... ROFL HAHAHHAHHA instead of $150 or $200...
    I guess that's why this place is so biased, the little bloggers are just as whacked when it comes to being honest.
  • InsaneScientist - Saturday, May 5, 2012 - link

    Good grief... resorting to personal attacks isn't exactly a good way to get people to listen to you.

    I'm not going to argue that 1080p isn't more common (from what I've read, no one is), because it is more common, you are quite correct there, however I must point out that your logic to arrive at that conclusion is faulty:
    You're contending that 1080p is more common (it is) because there are more models available on Newegg, but just knowing how many models are available doesn't tell us how many units those move.
    If, for example, each of those 22 models of 1920x1200 monitors moves 10 times as much stock as each of the 1920x1080, nearly as many of the 1920x1200 will have been sold as the 1920x1080 ones.
    Now, I don't think that's likely, and I do agree with you that 1080p is more common nowadays (see next point), but your argument is invalid, even though you have come to the correct conclusion.
    Consider this: there are currently two models of iPhone available, compared to dozens of Android phones. By the same logic as you're using, I could say that the iPhone is incredibly rare - I'd get laughed out of town if I tried to make that argument.

    The second point is that 1920x1200 hasn't been nearly as rare in the past as it is today. When I bought my previous monitor and my laptop (both 1920x1200), 1080p monitors were almost unheard of. Since monitors tend to last a while, it's not at all unreasonable for a disproportionate amount of people to be using them compared to their current sales.

    Thirdly, there is a point of diminishing returns. Notice the complete lack of any benchmarks at or below 1680x1050? These cards are so fast that comparisons at those resolutions are pointless - they're all fast enough for anything you could do to them at that screen res - even Crysis. 1920x1080 almost falls into that category, heck, even 1920x1200 almost falls into that category. Benchmarks are only about who wins if there is some advantage to winning. Below 2560x1600, which card you're using is almost completely irrelevant, so why does it even matter whether they used 1920x1080 or 1920x1200?
  • CeriseCogburn - Tuesday, May 8, 2012 - link

    Blah blah blah blah and I'm still 100% correct and you are not at all.
  • Decembermouse - Tuesday, May 8, 2012 - link

    You're quite a character.
  • anirudhs - Thursday, May 3, 2012 - link

    I use 2 at work - HP ZR24W.
  • piroroadkill - Sunday, May 6, 2012 - link

    Hm, odd.
    Not only do I have 1920x1200 monitor on my desktop, I have TWO laptops with 1920x1200 screens. Using one right now.
    Yes, they're rarer than 1080p screens, but this is a site for enthusiasts, therefore, it is more likely.
  • Ryan Smith - Thursday, May 3, 2012 - link

    The truth is a bit more simple than that. 5760x1200 is because our choice in monitors for multi-monitor testing was based on my personal monitor, which is another PA246Q. NVIDIA's limited display flexibility (same res + same sync) meant that it was easiest to just pair the PA246Q with some more PA246Qs. Consequently it's easier to just test these monitors at their native resolution when we're using NVIDIA cards.
