In an unusual move, NVIDIA took the opportunity earlier this week to announce a new 600 series video card before it would ship. Based on a pair of Kepler GK104 GPUs, the GeForce GTX 690 would be NVIDIA’s new flagship dual-GPU video card. And by all metrics it would be a doozy.

Packing a pair of high-clocked, fully enabled GK104 GPUs, NVIDIA was targeting GTX 680 SLI performance in a single card, the kind of dual-GPU card we haven’t seen in quite some time. GTX 690 would be a no-compromise card – quieter and less power hungry than GTX 680 SLI, as fast as GTX 680 in single-GPU performance, and as fast as GTX 680 SLI in multi-GPU performance. And at $999 it would be the most expensive GeForce card yet.

After the announcement, and based on the specs, it was clear that the GTX 690 had the potential to deliver on those claims, but could NVIDIA really pull it off? They could, and they did. Now let’s see how they did it.

                       GTX 690          GTX 680          GTX 590          GTX 580
Stream Processors      2 x 1536         1536             2 x 512          512
Texture Units          2 x 128          128              2 x 64           64
ROPs                   2 x 32           32               2 x 48           48
Core Clock             915MHz           1006MHz          607MHz           772MHz
Shader Clock           N/A              N/A              1214MHz          1544MHz
Boost Clock            1019MHz          1058MHz          N/A              N/A
Memory Clock           6.008GHz GDDR5   6.008GHz GDDR5   3.414GHz GDDR5   4.008GHz GDDR5
Memory Bus Width       2 x 256-bit      256-bit          2 x 384-bit      384-bit
VRAM                   2 x 2GB          2GB              2 x 1.5GB        1.5GB
FP64                   1/24 FP32        1/24 FP32        1/8 FP32         1/8 FP32
TDP                    300W             195W             365W             244W
Transistor Count       2 x 3.5B         3.5B             2 x 3B           3B
Manufacturing Process  TSMC 28nm        TSMC 28nm        TSMC 40nm        TSMC 40nm
Launch Price           $999             $499             $699             $499

As we mentioned earlier this week during the unveiling of the GTX 690, NVIDIA is outright targeting GTX 680 SLI performance with the GTX 690, unlike the GTX 590, which was notably slower than GTX 580 SLI. As GK104 is a much smaller and less power hungry GPU than GF110 from the get-go, NVIDIA doesn’t have to do nearly as much binning in order to get suitable chips to keep power consumption in check. The consequence of course is that, much like the GTX 680, the GTX 690 will be a smaller step up than what NVIDIA has done in previous years (e.g. GTX 295 to GTX 590), as GK104’s smaller size means it isn’t the same kind of massive monster that GF110 was.

In any case, for GTX 690 we’re looking at a base clock of 915MHz, a boost clock of 1019MHz, and a memory clock of 6.008GHz. Compared to the GTX 680 this is 91% of the base clock, 96% of the boost clock, and the same memory bandwidth; this is the closest a dual-GPU NVIDIA card has ever been to its single-GPU counterpart, particularly when it comes to memory bandwidth. Furthermore the GTX 690 uses fully enabled GPUs – every last CUDA core and every last ROP is active – so the difference between GTX 690 and GTX 680 comes down to the clockspeed difference and nothing more.
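For those who want to check the math, the ratios above work out as follows; this is just back-of-the-envelope arithmetic on the spec-table numbers, sketched in Python for convenience (nothing here comes from NVIDIA's tools):

```python
# Back-of-the-envelope check of the GTX 690 vs. GTX 680 clockspeed ratios,
# using the values from the spec table above (MHz, and MT/s for memory).
gtx690 = {"base": 915, "boost": 1019, "memory": 6008}
gtx680 = {"base": 1006, "boost": 1058, "memory": 6008}

for clock in ("base", "boost", "memory"):
    ratio = gtx690[clock] / gtx680[clock]
    print(f"{clock:>6} clock: {gtx690[clock]} / {gtx680[clock]} = {ratio:.0%}")

# Output:
#   base clock: 915 / 1006 = 91%
#  boost clock: 1019 / 1058 = 96%
# memory clock: 6008 / 6008 = 100%
```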

Of course this does mean that NVIDIA had to make a clockspeed tradeoff here to get GTX 690 off the ground, but their ace in the hole is going to be GPU Boost, which significantly eats into the clockspeed difference. As we’ll see when we get to our look at performance, in spite of NVIDIA’s conservative base clock the performance difference is frequently closer to the smaller boost clock difference.

As another consequence of using the more petite GK104, power consumption has also come down for this product range. Whereas the GTX 590 was a 365W TDP product and definitely used most of that power, the GTX 690 in its stock configuration takes a step back to 300W. And even that is a worst case scenario, as NVIDIA’s 263W power target for GPU Boost means that power consumption in a number of games (basically anything that has boost headroom) is well below 300W. For the adventurous however, the card is overbuilt to the same 365W specification as the GTX 590, which opens up some interesting overclocking opportunities that we’ll get into in a bit.
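To put those figures side by side, here is a quick sketch of the power numbers under discussion; the values are simply the 263W/300W/365W figures above, and the variable names are ours:

```python
# The three power figures discussed above, in watts: the GPU Boost power
# target, the stock TDP, and the 365W spec the board is actually built to.
boost_power_target = 263
stock_tdp = 300
board_spec = 365

print(f"Boost target vs. TDP headroom: {stock_tdp - boost_power_target}W")
print(f"Board spec vs. TDP headroom:   {board_spec - stock_tdp}W "
      f"({board_spec / stock_tdp - 1:.0%} above stock TDP)")
```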

For these reasons the GTX 690 should (and does) perform at near parity with GTX 680 SLI, so NVIDIA has no reason to be shy about pricing and has shot for the moon. The GTX 680 is $499, a pair of GTX 680s in SLI would be $999, and since the GTX 690 is supposed to be a pair of GTX 680s, it too is $999. This makes the GTX 690 the single most expensive consumer video card of the modern era, surpassing even 2007’s GeForce 8800 Ultra. It’s incredibly expensive and that price is going to raise some considerable ire, but as we’ll see when we get to our look at performance NVIDIA has reasonable justification for it – at least if you consider $499 for the GTX 680 reasonable.

Because of its $999 price tag, the GTX 690 has little competition. Besides GTX 680 SLI, its only other practical competition is AMD’s Radeon HD 7970 in Crossfire, which at MSRP would be roughly $40 cheaper at $958. We’ve already seen that the GTX 680 has a clear lead over the 7970, but thanks to differences in Crossfire/SLI scaling that logic will have a wrench thrown into it. But more on that later.
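For reference, the pricing math works out as follows, assuming the MSRPs from the pricing table at the end of this article (street prices will of course vary):

```python
# Rough spring 2012 pricing math for the GTX 690's competition, using the
# MSRPs from the pricing table at the end of this article.
gtx_690 = 999
gtx_680_sli = 2 * 499      # $998 for a pair of GTX 680s
hd_7970_cf = 2 * 479       # $958 for a pair of Radeon HD 7970s

print(f"GTX 680 SLI: ${gtx_680_sli}")
print(f"7970 CF:     ${hd_7970_cf} (${gtx_690 - hd_7970_cf} less than a GTX 690)")
```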

Finally, there’s the elephant in the room: availability. As it stands NVIDIA cannot keep the GTX 680 in stock in North America, and while the GTX 690 may be a very low volume part due to its price, it requires two binned GPUs, which are going to be even harder to come by. NVIDIA has not disclosed the specific number of cards that will be available for the launch, but after factoring in the fact that OEMs will be sharing in this stockpile, it’s clear that the retail allocations are going to be small. The best bet for potential buyers is to keep a very close eye on Newegg and other e-tailers, as like the GTX 680 it’s unlikely these cards will stay in stock for long.

The one bit of good news is that while cards will be rare, you won’t need to hunt across many vendors. As with the GTX 590 launch, NVIDIA is only using a small number of partners to distribute cards here. For North America this will be EVGA and Asus, and that’s it. So at least unlike the GTX 680, you will only need to watch two products instead of a dozen. On a broader basis, long term I have no reason to doubt that NVIDIA can produce these cards in sufficient volume once they have plenty of GPUs, but until TSMC’s capacity improves NVIDIA has no chance of meeting the demand for GK104 GPUs or any of the products based on them.

Spring 2012 GPU Pricing Comparison
AMD               Price   NVIDIA
                  $999    GeForce GTX 690
                  $499    GeForce GTX 680
Radeon HD 7970    $479
Radeon HD 7950    $399    GeForce GTX 580
Radeon HD 7870    $349
                  $299    GeForce GTX 570
Radeon HD 7850    $249
                  $199    GeForce GTX 560 Ti
                  $169    GeForce GTX 560
Radeon HD 7770    $139

 

Comments

  • InsaneScientist - Sunday, May 6, 2012 - link

    Or don't...

    It's 2 days later, and you've been active in the comments up through today. Why'd you ignore this one, Cerise?
  • CeriseCogburn - Sunday, May 6, 2012 - link

    Because you idiots aren't worth the time and last review the same silverblue stalker demanded the links to prove my points and he got them, and then never replied.
    It's clear what providing proof does for you people, look at the sudden 100% ownership of 1920x1200 monitors..
    ROFL
    If you want me to waste my time, show a single bit of truth telling on my point on the first page.
    Let's see if you pass the test.
    I'll wait for your reply - you've got a week or so.
  • KompuKare - Thursday, May 3, 2012 - link

    It is indeed sad. AMD comes up with really good hardware features like Eyefinity but then never polishes up the drivers properly. Looking at some of the Crossfire results is sad too: in Crysis and BF3, CF scaling is better than SLI (unsure, but I think the trifire and quadfire results for those games are even more in AMD's favour), but in Skyrim it seems that CF is totally broken.

    Of course compared to Intel, AMD's drivers are near perfect but with a bit more work they could be better than Nvidia's too rather than being mostly at 95% or so.

    Tellingly, JHH did once say that Nvidia were a software company, which was a strange thing for a hardware manufacturer to say. But this also seems to mean that they've forgotten the most basic primary thing which all chip designers should know: how to design hardware which works. Yes, I'm talking about bumpgate.

    See, despite all I said about AMD's drivers, I will never buy Nvidia hardware again after my personal experience of their poor QA. My 8800GT, my brother's 8800GT, this 8400M MXM I had, plus a number of laptops plus one nForce motherboard: they all had one thing in common, poorly made chips made by BigGreen, and they all died way before they were obsolete.

    Oh, and as pointed out in the Anand VC&G forums earlier today:

    "Well, Nvidia has the title of the worst driver bug in history at this point-
    http://www.zdnet.com/blog/hardware/w...hics-card/7... "

    Killing cards with a driver is a record.
  • Filiprino - Thursday, May 3, 2012 - link

    Yep, that's true. They killed cards with a driver. They should implement hardware auto shutdown, like CPUs. As for the nForce, I had one motherboard, the best nForce they made: nForce 2 for AMD Athlon. The rest of mobo chipsets were bullshit, including nForce 680.

    The QA I don't think is NVIDIA's fault but videocard manufacturers.
  • KompuKare - Thursday, May 3, 2012 - link


    The QA I don't think is NVIDIA's fault but videocard manufacturers.


    No, 100% Nvidia's fault. Although maybe QA isn't the right word. I was referring to Nvidia using the wrong solder underfill for a few million chips (the exact number is unknown): they were mainly mobile parts, and Nvidia had to put $250 million aside to settle a class action.

    http://en.wikipedia.org/wiki/GeForce_8_Series#Prob...

    Although that wiki article is rather lenient towards Nvidia, since that bit about fan speeds is a red herring: more accurately, it was Nvidia which spec'ed their chips to a certain temperature, and designs which run way below that will have put less stress on the solder, but to say it was poor OEM and AIB design which led to the problem is not correct. Anyway, the proper exposé was by Charlie D. in the Inquirer and later SemiAccurate.
  • CeriseCogburn - Friday, May 4, 2012 - link

    But in fact it was a bad heatsink design, thank HP, and view the thousands of heatsink repairs, including the "add a copper penny" method to reduce the giant gap between the HS and the NV chip.
    Charlie was wrong, a liar, again, as usual.
  • KompuKare - Friday, May 4, 2012 - link

    Don't be silly. While HP's DV6000s were the most notorious failures, and that was due to HP's poorly designed heatsink / cooling, bumpgate also saw Dells, Apples and others:

    http://www.electronista.com/articles/10/09/29/suit...
    http://www.nvidiadefect.com/nvidia-settlement-t874...

    The problem was real, continues to be real and also affects G92 desktop parts and certain nForce chipsets like the 7150.

    Yes, the penny shim trick will fix it for a while, but if you actually were to read up on technicians' forums where they fix laptops, that plus reflows are only a temporary fix because the actual chips are flawed. Re-balling with new, better solder is a better solution, but not many offer those fixes since it involves 100s of tiny solder balls per chip.

    Before blindly leaping to Nvidia's defence like a fanboy, please do some research!
  • CeriseCogburn - Saturday, May 5, 2012 - link

    Before blindly taking the big lie from years ago repeated above to attack nvidia for no reason at all other than all you have is years old misinformation, then wail on about it, while telling someone else some more lies about it, check your own immense bias and lack of knowledge, since I had to point out the truth for you to find, and you forgot DV9000, dv2000 and dell systems with poor HS design, let alone apple amd console video chip failings, and the fact that payment was made and restitution was delivered, which you also did not mention, because of your fanboy problems, obviously in amd's favor.
  • Ashkal - Thursday, May 3, 2012 - link

    In the price comparison in Final Words you are not referring to AMD products. I think AMD is better in price/performance ratio.
  • prophet001 - Thursday, May 3, 2012 - link

    I agree
