Final Words

Traditionally, dual-GPU cards have been a mixed bag. More often than not they have to sacrifice a significant amount of single-GPU performance in order to put two GPUs on a single card, and on the rare occasions where that tradeoff doesn't happen there's some other tradeoff, such as a loud cooler or immense power consumption. NVIDIA told us that they could break this tradition and put two full GTX 680s on a single card, and that they could do so while making it quieter and less power-hungry than a dual video card SLI setup. After going through our benchmarking process we can safely say that NVIDIA has met their goals.

From a gaming performance perspective we haven’t seen a dual-GPU card reach the performance of a pair of high-end cards in SLI/CF since the Radeon HD 4870X2 in 2008, so it’s quite refreshing to see someone get so close again 4 years later. The GTX 690 doesn’t quite reach the performance of the GTX 680 SLI, but it’s very, very close. Based on our benchmarks we’re looking at 95% of the performance of the GTX 680 SLI at 5760x1200 and 96% of the performance at 2560x1600. These are measurable differences, but only just. For all practical purposes the GTX 690 is a single card GTX 680 SLI – a single card GTX 680 SLI that consumes noticeably less power under load and is at least marginally quieter too.

With that said, this would typically be the part of the review where we would inject a well-placed recap of the potential downsides of multi-GPU technology; but in this case there's really no need. Unlike the GTX 590 and the GTX 295, NVIDIA is not making a performance tradeoff here compared to their single-GPU flagship card. When SLI works the GTX 690 is the fastest card out there, and when SLI doesn't work the GTX 690 is still the fastest card out there. For the first time in a long time, using a dual-GPU card doesn't mean sacrificing single-GPU performance, and that's a game changer.

At this point in time NVIDIA offers two different but compelling solutions for ultra-enthusiast performance: the GTX 690 and the GTX 680 SLI, and they complement each other well. For most situations the GTX 690 is going to be the way to go thanks to its lower power consumption and lower noise levels, but for cases that need fully exhausting video cards the GTX 680 SLI can offer the same gaming performance at the same price. Unfortunately we're going to have to put AMD out of the running here; as we've seen in games like Crysis and Metro, the 7970 in Crossfire has a great deal of potential, but as it stands Crossfire is simply too broken overall to recommend.

The only real question, I suppose, is simply this: is the GTX 690 worthy of its $999 price tag? I don't believe there's any argument to be had with respect to whether the GTX 690 is worth getting over the GTX 680 SLI, as we've clearly answered that above. As a $999 card it doesn't double the performance of the $499 GTX 680, but SLI has never offered quite that much of a performance boost. At the same time, however, SLI has almost always been good enough to justify the cost of another GPU if you must have better performance than the fastest single GPU can provide, and this is one of those times.

Is $999 expensive? Absolutely. Is it worth it? If you're gaming at 2560x1600 or 5760x1200, the GTX 690 is at least worth considering. You can certainly get by on less, but if you want 60fps or better, and you want it with the same kind of ultra high quality that single-GPU cards can already deliver at 1920x1080, then you can't do any better than the GTX 690.

Wrapping things up, there is one question left that I feel we still don't have a good answer to: how much RAM a $999 card should have. NVIDIA went with a true equal for the GTX 680 SLI, right down to the 2GB of VRAM per GPU. Looking back at what happened to the Radeon HD 5970 and its 1GB of VRAM per GPU – we can't run our 5760x1200 benchmarks on it, and even a couple of our 2560x1600 benchmarks are too much for it – I'm left uneasy. None of our benchmarks today seem to require more than 2GB of VRAM, but that much VRAM has been common in high-end cards since late 2010; the day will come when 2GB isn't enough, and I'm left to wonder when. A GTX 690 with 4GB of VRAM per GPU would be practically future-proof, but with 2GB of VRAM NVIDIA is going to be cutting it close.

Comments

  • von Krupp - Saturday, May 5, 2012 - link

    Not precisely. That $350 performance point? It used to be a $200 performance point. Similarly, that $350 point will turn into a $400 performance point. So, assuming I maintain the price tier, graphics returns for my dollar are gradually tapering off. I look at the performance I was getting out of my 7800 GT at 1280x1024, and it wasn't worth upgrading to a newer card, period, because of Windows XP, my single core CPU, and the fact that I was already maxing out every game I had and still getting decent frame rates. I think the key factor is that I do not care if I dip below 60 frames, as long as I'm above 30 and getting reasonable frame times.

    I also know that consoles extend the life of PC hardware. The 7800 GT is a 20-pipe version of the GTX, which is in turn the GPU found in the PS3. Devs have gotten much better at optimization in titles that matter to me.
  • CeriseCogburn - Saturday, May 5, 2012 - link

    You spend well over $1,600 on a decent system.
    It makes no sense to spend all that money, then buy monitors the cards in question cannot successfully drive in the 3-year-old Crysis, let alone in well over half the benchmarks in this article, without turning DOWN the settings.
    You cannot turn up DX11 tessellation; keep it on medium.
    You cannot turn up MSAA past 4X, and better keep it at 2X.
    You had better turn down your visual distance in game.
    And that's in fact with all the "console ports" you moan are "holding us back".
    I get it; the obvious problem is that none of you seem to, because you want to moan and pretend spending $1,000.00 or more on a monitor alone is "how it's done", while whining that you cannot even afford $500 for a single video card.
    These cards successfully drive 1920x1080 monitors in the benchmarks, but just barely - and if you turn the eye candy up, they cannot do it.
  • CeriseCogburn - Saturday, May 5, 2012 - link

    Thanks for telling everyone how correct I am by doing a pure 100% troll attack after you and yours could not avoid the facts.
    Your mommy, if you knew who she was, must be very disappointed.
  • geok1ng - Sunday, May 6, 2012 - link

    This card was not built for 2560x1600 gaming. A single 680 is more than enough for that.
    The 690 was built for 5760x1200 gaming.

    I would like to see triple 30" tests. Nothing like gaming at 7680x1600 to feel that you are spending your VGA money well.
  • CeriseCogburn - Sunday, May 6, 2012 - link

    You can use cards 2 generations back for that, but like these cards, you will be turning down most if not nearly all of the eye candy, and be stuck tweaking and clocking, and jittering and wishing you had more power.
    These cards cannot handle 1920x1080 in current "console port" games unless you turn them down, and that goes ESPECIALLY for the AMD cards that suck at extreme tessellation and have more issues with anything above 4xAA, and often with 4xAA itself.
    The 5770 is an Eyefinity card and runs 5760x1200 too.
    I guess none of you will ever know until you try it, and it appears none of you have spent the money and become disappointed turning down the eye candy settings - so blabbering about resolutions is all you have left.
  • _vor_ - Tuesday, May 8, 2012 - link

    "... blabbering..."

    Pot, meet kettle.
  • CeriseCogburn - Sunday, May 6, 2012 - link

    They cost $400 to $2,000 plus, not $150 like a 24" 1080p panel.
    Thanks for playing.
  • hechacker1 - Monday, May 7, 2012 - link

    Nope, you can already get 27" 2560x1440 IPS panels (the same panels that Apple uses) for $400.

    They're rare, but currently they are building them in batches of 1000 to see how strong demand is for them.

    Sure, the 120Hz will sort of go to waste due to the slow IPS switching speed, but it will accept that signal with zero input lag.

    The only problem is that only the 680 seems to have a RAMDAC fast enough to do 120Hz. Radeons tend to cap out at 85Hz.
  • marine73 - Monday, May 7, 2012 - link

    After checking Newegg it would seem that, unfortunately for Nvidia, this will be another piece of vaporware. Perhaps they should scale Kepler down to 22nm and contract Intel to fab the chips, since TSMC has major issues with 28nm. Just a thought.
  • marine73 - Monday, May 7, 2012 - link

    I guess I should retract my comments about TSMC as other customers are not experiencing supply issues with 28nm parts. Apparently the issues are with Nvidia's design, which may require another redo. I'm guessing AMD will be out with their 8000 series before Nvidia gets their act together. Sad because I have used several generations of Nvidia cards and was always happy with them.
