Original Link: http://www.anandtech.com/show/5805/nvidia-geforce-gtx-690-review-ultra-expensive-ultra-rare-ultra-fast
NVIDIA GeForce GTX 690 Review: Ultra Expensive, Ultra Rare, Ultra Fast
by Ryan Smith on May 3, 2012 9:00 AM EST
In an unusual move, NVIDIA took the opportunity earlier this week to announce a new 600 series video card before they would be shipping it. Based on a pair of Kepler GK104 GPUs, the GeForce GTX 690 would be NVIDIA’s new flagship dual-GPU video card. And by all metrics it would be a doozy.
Packing a pair of high clocked, fully enabled GK104 GPUs, NVIDIA was targeting GTX 680 SLI performance in a single card, the kind of dual-GPU card we haven’t seen in quite some time. GTX 690 would be a no compromise card – quieter and less power hungry than GTX 680 SLI, as fast as GTX 680 in single-GPU performance, and as fast as GTX 680 SLI in multi-GPU performance. And at $999 it would be the most expensive GeForce card yet.
After the announcement and based on the specs it was clear that GTX 690 had the potential, but could NVIDIA really pull this off? They could, and they did. Now let’s see how they did it.
| | GTX 690 | GTX 680 | GTX 590 | GTX 580 |
|---|---|---|---|---|
| Stream Processors | 2 x 1536 | 1536 | 2 x 512 | 512 |
| Texture Units | 2 x 128 | 128 | 2 x 64 | 64 |
| ROPs | 2 x 32 | 32 | 2 x 48 | 48 |
| Memory Clock | 6.008GHz GDDR5 | 6.008GHz GDDR5 | 3.414GHz GDDR5 | 4.008GHz GDDR5 |
| Memory Bus Width | 2 x 256-bit | 256-bit | 2 x 384-bit | 384-bit |
| VRAM | 2 x 2GB | 2GB | 2 x 1.5GB | 1.5GB |
| FP64 | 1/24 FP32 | 1/24 FP32 | 1/8 FP32 | 1/8 FP32 |
| Transistor Count | 2 x 3.5B | 3.5B | 2 x 3B | 3B |
| Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 40nm | TSMC 40nm |
As we mentioned earlier this week during the unveiling of the GTX 690, NVIDIA is outright targeting GTX 680 SLI performance with the GTX 690, unlike the GTX 590, which was notably slower than GTX 580 SLI. As GK104 is a much smaller and less power hungry GPU than GF110 from the get-go, NVIDIA doesn’t have to do nearly as much binning in order to get suitable chips to keep their power consumption in check. The consequence of course is that much like GTX 680, GTX 690 will be a smaller step up than what NVIDIA has done in previous years (e.g. GTX 295 to GTX 590), as GK104’s smaller size means it isn’t the same kind of massive monster that GF110 was.
In any case, for GTX 690 we’re looking at a base clock of 915MHz, a boost clock of 1019MHz, and a memory clock of 6.008GHz. Compared to the GTX 680 this is 91% of the base clock, 96% of the boost clock, and the same memory bandwidth; this is the closest a dual-GPU NVIDIA card has ever been to its single-GPU counterpart, particularly when it comes to memory bandwidth. Furthermore GTX 690 uses fully enabled GPUs – every last CUDA core and every last ROP is active – so the difference between GTX 690 and GTX 680 is outright the clockspeed difference and nothing more.
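As a quick sanity check on those ratios – assuming the GTX 680’s reference clocks of 1006MHz base and 1058MHz boost – the math works out as follows:

```python
# Clock ratio sanity check: GTX 690 vs. GTX 680 (GTX 680 reference clocks assumed)
gtx680 = {"base": 1006, "boost": 1058, "mem": 6008}  # MHz (memory is effective data rate)
gtx690 = {"base": 915, "boost": 1019, "mem": 6008}

for domain in ("base", "boost", "mem"):
    ratio = gtx690[domain] / gtx680[domain]
    print(f"{domain}: {ratio:.0%} of GTX 680")
# base works out to roughly 91%, boost to roughly 96%, memory to 100%
```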
Of course this does mean that NVIDIA had to make a clockspeed tradeoff here to get GTX 690 off the ground, but their ace in the hole is going to be GPU Boost, which significantly eats into the clockspeed difference. As we’ll see when we get to our look at performance, in spite of NVIDIA’s conservative base clock the performance difference is frequently closer to the smaller boost clock difference.
As another consequence of using the more petite GK104, NVIDIA’s power consumption has also come down for this product range. Whereas GTX 590 was a 365W TDP product and definitely used most of that power, GTX 690 in its stock configuration takes a step back to 300W. And even that is a worst case scenario, as NVIDIA’s power target for GPU boost of 263W means that power consumption under a number of games (basically anything that has boost headroom) is well below 300W. For the adventurous however the card is overbuilt to the same 365W specification as the GTX 590, which opens up some interesting overclocking opportunities that we’ll get into in a bit.
For these reasons the GTX 690 should (and does) reach performance nearly at parity with the GTX 680 SLI. Consequently NVIDIA has no reason to be shy about pricing and has shot for the moon. The GTX 680 is $499, a pair of GTX 680s in SLI would be $999, and since the GTX 690 is supposed to be a pair of GTX 680s, it too is $999. This makes the GTX 690 the single most expensive consumer video card in the modern era, surpassing even 2007’s GeForce 8800 Ultra. It’s incredibly expensive and that price is going to raise some considerable ire, but as we’ll see when we get to our look at performance NVIDIA has reasonable justification for it – at least if you consider $499 for the GTX 680 reasonable.
Because of its $999 price tag, the GTX 690 has little competition. Besides the GTX 680 in SLI, its only other practical competition is AMD’s Radeon HD 7970 in Crossfire, which at MSRP would be $41 cheaper at $958. We’ve already seen that the GTX 680 has a clear lead on the 7970, but thanks to differences in Crossfire/SLI scaling that logic will have a wrench thrown in it. But more on that later.
Finally, there’s the elephant in the room: availability. As it stands NVIDIA cannot keep the GTX 680 in stock in North America, and while the GTX 690 may be a very low volume part due to its price, it requires two binned GPUs, which are going to be even harder to come by. NVIDIA has not disclosed the specific number of cards that will be available for the launch, but after factoring in the fact that OEMs will be sharing in this stockpile it’s clear that the retail allocations are certainly going to be small. The best bet for potential buyers is to keep a very close eye on Newegg and other e-tailers, as like the GTX 680 it’s unlikely these cards will stay in stock for long.
The one bit of good news is that while cards will be rare, you won’t need to hunt across many vendors. As with the GTX 590 launch NVIDIA is only using a small number of partners to distribute cards here. For North America this will be EVGA and Asus, and that’s it. So at least unlike the GTX 680 you will only need to watch over two products instead of a dozen. On a broader basis, long term I have no reason to doubt that NVIDIA can produce these cards in sufficient volume when they have plenty of GPUs, but until TSMC’s capacity improves NVIDIA has no chance of meeting the demand for GK104 GPUs or any of the products based off of it.
| Spring 2012 GPU Pricing Comparison | | |
|---|---|---|
| AMD | Price | NVIDIA |
| | $999 | GeForce GTX 690 |
| | $499 | GeForce GTX 680 |
| Radeon HD 7970 | $479 | |
| Radeon HD 7950 | $399 | GeForce GTX 580 |
| Radeon HD 7870 | $349 | |
| | $299 | GeForce GTX 570 |
| Radeon HD 7850 | $249 | |
| | $199 | GeForce GTX 560 Ti |
| | $169 | GeForce GTX 560 |
| Radeon HD 7770 | $139 | |
Meet The GeForce GTX 690
Much like the GTX 680 launch and the GTX 590 before it, the first generation of GTX 690 cards are reference boards being built by NVIDIA, with NVIDIA using their partners for distribution and support. In fact NVIDIA is enforcing some pretty strict standards on their partners to maintain a consistent image of the GTX 690 – not only will all of the launch cards be based off of NVIDIA’s reference design, but NVIDIA’s partners will be severely restricted in how they can dress up their cards, with stickers not being allowed anywhere on the shroud. Partners will only be able to put their mark on the PCB, meaning the bottom and the rear of the card. In the future we’d expect to see NVIDIA’s partners do some customizing through waterblocks and such, but for the most part this will be the face of the GTX 690 throughout its entire run.
And with that said, what a pretty face it is.
Let’s get this clear right off the bat: the GTX 690 is truly a luxury video card. If the $1000 price tag didn’t sell that point, NVIDIA’s design choices will. There are a lot of design choices based on technical reasons, but at the same time NVIDIA has gone out of their way to build the GTX 690 out of metals instead of plastics not for major performance or quality reasons, but rather just because they can. The GTX 690 is a luxury video card and NVIDIA intends to make that fact unmistakable.
But before we get too far ahead of ourselves, let’s talk about basic design. At its most basic level, the GTX 690 is a reuse of the design principles of the GTX 590. With the exception of perhaps overclocking, the GTX 590 was a well-designed card that greatly improved on the design of past NVIDIA dual-GPU cards and managed to dissipate 365W of heat without sounding like a small hurricane. Since the GTX 690 is designed around the same power constraints and at the same time is a bit simpler in some regards – the GPUs are smaller and the memory busses narrower – NVIDIA has opted to reuse the GTX 590’s basic design.
The reuse of the GTX 590’s design means that the GTX 690 is a 10” long card with a double-wide cooler, making it the same size as the single-GPU GTX 680. The basis of the GTX 690’s cooler is a single axial fan sitting at the center of the card, with a GPU and its RAM at either side. Heat from one GPU goes out the rear of the card, while the heat from the other GPU goes out the front. Heat transfer will once again be provided by a pair of nickel tipped aluminum heatsinks attached to vapor chambers, which also marks the first time we’ve seen a vapor chamber used with a 600 series card. Meanwhile a metal baseplate runs along the card at the same height as the top of the GPUs, not only providing structural rigidity but also providing cooling for the VRMs and RAM.
Compared to the GTX 590 NVIDIA has made a couple of minor tweaks however. The first is that NVIDIA has moved the baseplate a bit higher on the GTX 690 so that it covers all of the components other than the GPU, so that those components don’t need to stick through the baseplate. The idea here is that turbulence is reduced as airflow doesn’t need to deal with those obstructions, instead being generally driven by small channels in the baseplate. The second change is that NVIDIA has rearranged the I/O port configuration so that the stacked DVI connector is moved to the very bottom of the bracket rather than being in roughly the middle, maximizing just how much space is available for venting hot air out of the front of the card. In practice these aren’t huge differences – our test results don’t find the GTX 690 to be significantly quieter than the GTX 590 under gaming loads – but every bit helps.
Of course this design means that you absolutely need an airy case – you’re effectively dissipating 150W to 170W not just into your case, but straight towards the front of your case. As we saw with the GTX 590 and the Radeon HD 6990 this has a detrimental effect on anything that may be directly behind the video card, which for most cases is going to be the HDD cage. As we did with the GTX 590, we took some quick temperature readings with a hard drive positioned directly behind the GTX 690 in order to get an idea of the impact of exhausting hot air in this fashion.
| Seagate 500GB Hard Drive Temperatures | |
|---|---|
| GeForce GTX 690 | 38C |
| Radeon HD 7970 | 28C |
| GeForce GTX 680 | 27C |
| GeForce GTX 590 | 42C |
| Radeon HD 6990 | 37C |
| Radeon HD 5970 | 31C |
Unsurprisingly the end result is very similar to the GTX 590. The temperature increase is reduced some thanks to the lower TDP of the card, but we’re still driving up the temperature of our HDD by over 10C. This is still well within the safe operating range of a HDD, but our best advice to GTX 690 buyers is to keep any drive bays directly behind the GTX 690 clear, just in case. That’s the tradeoff for making it quieter and capable of dissipating more heat than older blower designs.
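The “over 10C” figure comes straight from the table; treating the GTX 680’s 27C result as the baseline (its blower exhausts essentially no heat toward the front of the case), the deltas work out as:

```python
# HDD temperature deltas vs. the GTX 680 baseline (values from the table above)
baseline_c = 27  # GeForce GTX 680: blower design, ~no forward exhaust
readings_c = {
    "GeForce GTX 690": 38,
    "Radeon HD 7970": 28,
    "GeForce GTX 590": 42,
    "Radeon HD 6990": 37,
    "Radeon HD 5970": 31,
}
for card, temp in readings_c.items():
    print(f"{card}: +{temp - baseline_c}C")
# The GTX 690 drives the drive up by 11C -- less than the GTX 590's 15C, but still significant
```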
Moving on, let’s talk about the technical details of the GTX 690. GPU power is supplied by 10 VRM phases, divided up into 5 phases per GPU. Like many other aspects of the GTX 690 this is the same basic design as the GTX 590, which means that it should be enough to push up to 365W but it’s no more designed for overvolting than the GTX 590 was. Any overclocking potential with the GTX 690 will be based on the fact that the card’s default configuration is for 300W, allowing for some liberal adjustment of the power target.
Meanwhile the RAM on the GTX 690 is an interesting choice. NVIDIA is using Samsung 6GHz GDDR5 as opposed to the Hynix 6GHz GDDR5 they used on the GTX 680. We haven’t seen much of Samsung lately, and in fact the last time we had a product with Samsung GDDR5 cross our path was on the GTX 590. This may or may not be significant, but it’s something to keep in mind for when we’re talking about overclocking.
Elsewhere NVIDIA’s choice of PCIe bridge is a PLX PCIe 3.0 bridge, which is the first time we’ve seen NVIDIA use a 3rd party bridge. With the GTX 590 and earlier dual-GPU cards NVIDIA used their NF200 bridge, which was a PCIe 2.0 capable bridge chip designed by NVIDIA’s chipset group. However as NVIDIA no longer has a chipset group they also no longer have a group to design such chips, and with NF200 now outdated in the face of PCIe 3.0, NVIDIA has turned to PLX to provide a PCIe 3.0 bridge chip.
It’s worth noting that because NVIDIA is using a 3rd party PCIe 3.0 bridge here, they’ve opened up PCIe 3.0 support compared to the GTX 680. Whereas the GTX 680 officially only supported PCIe 3.0 on Ivy Bridge systems – specifically excluding Sandy Bridge-E – NVIDIA is enabling PCIe 3.0 on SNB-E systems thanks to the use of the PLX bridge. So SNB-E system owners won’t need to resort to registry hacks to enable PCIe 3.0, and there don’t appear to be any stability concerns on SNB-E with the PLX bridge. Meanwhile for users with PCIe 2 systems such as SNB, the PLX bridge supports the simultaneous use of PCIe 3.0 and PCIe 2, so regardless of the system used the GK104 GPUs will always be communicating with each other over PCIe 3.0.
Next, let’s talk about external connectivity. On the power side of things the GTX 690 features two 8-pin PCIe power sockets, allowing the card to safely draw up to 375W. The overbuilt power delivery system allows NVIDIA to sell the card as a 300W card while giving it some overclocking headroom for enthusiasts that want to play with the card’s power target. Meanwhile at the front of the card we find the sole SLI connector, which allows for the GTX 690 to be connected to another GTX 690 for quad-SLI.
As for display connectivity NVIDIA is reusing the same port configuration we first saw with the GTX 590. This means 3 DL-DVI ports (2 I-type and 1 D-type) and a mini-DisplayPort for a 4th display. Interestingly NVIDIA is tying the display outputs to both GPUs rather than to a single GPU, and seeing as how NVIDIA still lacks display flexibility on par with AMD, this means that the GTX 690 has display configuration limitations similar to the GTX 590 and GTX 680 SLI. We’ve attached the relevant diagrams below, but in short you can’t use one of the DVI ports for a 4th monitor unless you’re in surround mode. It’s not clear at this time where DisplayPort 1.2’s multiple display capability fits into this, but since the MST hubs are still not available it’s not something that can be used at this time anyhow.
Last but certainly not least we have the luxury aspects of the GTX 690. While the basic design of the GTX 690 resembles the GTX 590, NVIDIA has replaced virtually every bit of plastic with metal for aesthetic/perceptual purposes. The basic shroud is composed of cast aluminum while the fan housing is made out of injection molded magnesium. In fact the only place you’ll find plastics on the shroud is on the polycarbonate windows over the heatsinks, which allow you to see the heatsinks just because.
To be clear, the GTX 590 was a solid card and NVIDIA could have just as well used plastic again to no detriment, but the use of metal is definitely a noticeable change. The GTX 690 takes the “solid” concept to a completely different level, and while I have no intention of testing it, you could probably clock someone with the card and cause more damage to them than to the GTX 690. Coupled with the return of the LED backlit GeForce logo – this time even larger and in the center of the card – it’s clear that NVIDIA not only wants buyers to feel like they’ve purchased a solid card, but to be able to show it off in a case with a windowed side panel.
Surprisingly, there’s one place where NVIDIA didn’t put a metal part on the GTX 690 that they did on the GTX 590: the back. The GTX 590 shipped with a pair of partial backplates to serve as heatsinks for the RAM on the back of the card, and while the GTX 690 doesn’t have any RAM on its backside thanks to the smaller number of chips required, I’m genuinely surprised NVIDIA didn’t throw in a backplate for the same reason as the metal shroud – just because. Backplates are the scourge of video cards when it comes to placing them directly next to each other because of the space they occupy, but with the GTX 690 you need at least one free slot anyhow, so this is one of the few times where a backplate wouldn’t get in the way.
With the GTX 590 NVIDIA found themselves with a bit of a PR problem. Hardcore overclockers had managed to send their GTX 590s to a flaming death, which made the GTX 590 look bad and required that NVIDIA lock down all voltage control so that no one else could repeat the feat. The GTX 590 was a solid card at stock, but NVIDIA never designed it for overvolting, and indeed I’m not sure you could even say it was designed for overclocking since it was already running at a 365W TDP.
Since that incident NVIDIA has taken a much harder stance on overvolting, which we first saw with the GTX 680. The reference GTX 680 could not be overvolted, with voltage options limited to whatever voltage the top GPU boost bin used (typically 1.175v). This principle will be continuing with the GTX 690; there will not be any overvolting options.
However this is not to say that the GTX 690 isn’t built for overclocking. The GTX 680 still has some overclocking potential thanks to some purposeful use of design headroom, and the GTX 690 is going to be the same story. In fact it’s much the same story as with AMD’s Radeon HD 5970 and 6990, both of which shipped in configurations that kept power consumption at standard levels while also offering modes that unlocked overclocking potential in exchange for greater power consumption (e.g. AUSUM). As we’ve previously mentioned the GTX 690 is designed to be able to handle up to 375W even though it ships in a 300W configuration, and that 75W is our overclocking headroom.
NVIDIA will be exposing the GTX 690’s overclocking options through a combination of power targets and clock offsets, just as with the GTX 680. This in turn means that the GTX 690 effectively has two overclocking modes:
- Power target overclocking. By just raising the power target (max +35%) you can increase how often the GTX 690 can boost and how frequently it can hit its max boost bin. By adjusting the power target performance will only increase in games/applications that are being held back by NVIDIA’s power limiter, but in return this is easy mode overclocking as all of the GPU boost bins are already qualified for stability. In other words, this is the GTX 690’s higher performance, higher power 375W mode.
- Power target + offset overclocking. By using clock offsets it’s possible to further raise the performance of the GTX 690, and to do so across all games and applications. The lack of overvolting support means that there isn’t a ton of headroom for the offset, but as it stands NVIDIA’s clocks are conservative for power purposes and Kepler is clearly capable of more than 915MHz/1019MHz. This of course will require testing for stability, and it should be noted that NVIDIA’s GPU boost bins already go so far above the base clock that it won’t take much of an offset to be boosting into 1.2GHz+.
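To put rough numbers to the first mode – a sketch using the figures above, and assuming the +35% slider applies to the 263W default power target – the headroom works out like so:

```python
# GTX 690 power headroom sketch (figures from the text; +35% applied to the
# default power target is an assumption about how the slider is calculated)
power_target_w = 263   # default GPU Boost power target
tdp_w = 300            # stock TDP
board_limit_w = 375    # two 8-pin connectors + PCIe slot = 375W maximum safe draw

max_target_w = power_target_w * 1.35  # power target slider at its +35% maximum
print(f"Max power target: {max_target_w:.0f}W")
# ~355W: comfortably under the 375W board limit, and in line with the card
# being overbuilt to the GTX 590's 365W specification
```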
NVIDIA’s goal with the GTX 690 was not just to reach GTX 680 SLI performance, but also match the GTX 680’s overclocking capabilities. We’ll get to our full results in our overclocking performance section, but for the time being we’ll leave it at this: we hit 1040MHz base, 1183MHz boost, and 7GHz memory on our GTX 690; even without overvolting it’s a capable overclocker.
GeForce Experience & The Test
Before jumping into our test results, there’s one last thing we wanted to touch upon quickly. Along with announcing the GTX 690 at the NVIDIA Gaming Festival 2012, NVIDIA also used the occasion to announce a new software utility called GeForce Experience.
For some time now NVIDIA has offered a feature they call Optimal Playable Settings through GeForce.com, which are a series of game setting configurations that NVIDIA has tested and is recommending for various GeForce video cards. It’s a genuinely useful service, but it’s also not well known and only covers desktop GPUs.
With GeForce Experience NVIDIA is going to be taking that concept one step further and offering an application that interfaces with both the game and the successor to NVIDIA’s OPS service. The key difference is that rather than having the settings on a website and requiring the user to punch in those settings by hand, GeForce Experience can fetch those settings from NVIDIA and make the settings changes on its own. This would make the process much more accessible, as not only do users not need to know anything about how to access their settings or what they do, but the moment NVIDIA includes this with their drivers it will be far more widespread than OPS ever was.
The other change is that NVIDIA is going to be moving away from manual testing in favor of automated testing. OPS are generated by hand, whereas GeForce Experience settings are going to be based on automated testing, allowing NVIDIA to cover a wider range of games and video cards, most importantly by including mobile video cards. NVIDIA already has GPU farms for driver regression testing, so this is a logical extension of that concept to use those farms to generate and test game settings.
GeForce Experience will be launching in beta form on June 6th.
The press drivers for the GTX 690 are 301.33, though it sounds like NVIDIA will actually launch with a slightly newer version today. As the GTX 690 is launching so soon after the GTX 680 these drivers are virtually identical to the GTX 680 launch drivers. Meanwhile for the GeForce 500 series we’re using 301.24, and for the AMD Radeon cards Catalyst 12.4.
We’d also like to give a shout-out to Asus, who sent us one of their wonderful PA246Q 24” P-IPS monitors to allow us to complete our monitor set for multi-monitor testing. From here on we’ll be able to offer multi-monitor results for our high-end cards, and a number of cards have already had that data added in Bench.
Next, based on an informal poll on our forums we’re going to be continuing our existing SLI/CF testing methodology. All of our test results will be with both cards directly next to each other as opposed to spaced apart in order to test the worst case scenario. Users with such a configuration are a minority based on our data, but there are still enough of them that we believe it should be covered.
Finally, we’d like to note that since we don’t have a matching pair of 7970 reference cards, we’re using our one reference card along with XFX’s R7970 BEDD. For gaming performance, power consumption, and temperatures this doesn’t have a material impact, but it means we don’t have meaningful noise performance for the 7970.
| CPU: | Intel Core i7-3960X @ 4.3GHz |
|---|---|
| Motherboard: | EVGA X79 SLI |
| Chipset Drivers: | Intel 126.96.36.1992 |
| Power Supply: | Antec True Power Quattro 1200 |
| Hard Disk: | Samsung 470 (256GB) |
| Memory: | G.Skill Ripjaws DDR3-1867 4 x 4GB (8-10-9-26) |
| Case: | Thermaltake Spedo Advance |
| Video Cards: | AMD Radeon HD 7970, AMD Radeon HD 6990, AMD Radeon HD 6970, AMD Radeon HD 5970, NVIDIA GeForce GTX 690, NVIDIA GeForce GTX 680, NVIDIA GeForce GTX 590, NVIDIA GeForce GTX 580 |
| Video Drivers: | NVIDIA ForceWare 301.24, NVIDIA ForceWare 301.33, AMD Catalyst 12.4 |
| OS: | Windows 7 Ultimate 64-bit |
Kicking things off as always is Crysis: Warhead. It’s no longer the toughest game in our benchmark suite, but it’s still a technically complex game that has proven to be a very consistent benchmark. Thus even four years since the release of the original Crysis, “but can it run Crysis?” is still an important question, and the answer when it comes to setups using a pair of high-end 28nm GPUs is “you better damn well believe it.”
Crysis was a game that Kepler didn’t improve upon by a great deal compared to the Fermi based GTX 580. NVIDIA sees some good SLI scaling here, but AMD’s performance lead with a single GPU translates into an equally impressive lead with multiple GPUs; in spite of all of its capabilities the GTX 690 trails the 7970CF by 18%. So long as AMD gets good Crossfire scaling, there’s just no opening for Kepler to win, allowing AMD to handily trounce the GTX 690 here.
As for the intra-NVIDIA comparisons, the GTX 690 does well for itself here. Performance relative to the GTX 680 SLI at 2560 is 98%, which represents a 77% lead over the GTX 680. Overall performance is quite solid; at 55.7fps we’re nearly to 60fps on Enthusiast quality at 2560 with 4x MSAA, which is the holy grail for a video card. Even 5760 is over 60fps, albeit at lower quality settings and without AA.
It’s taken nearly 4 years, but we’re almost there; Crysis at maximum on a single video card.
Our minimum framerates are much the same story for NVIDIA. The GTX 690 once again just trails the GTX 680 SLI, while interestingly enough the dual-GPU NVIDIA solutions manage to erode AMD’s lead at a single point: 2560. Here they only trail by 8%, versus 20%+ at 5760 and 1920. Though at 1920 we also see another interesting outcome: the GTX 580 SLI beats the GTX 680 SLI and GTX 690 in minimum framerates. This would further support our theory that the GTX 680 is memory bandwidth starved in Crysis, especially at the lowest performance points.
Paired with Crysis as our second behemoth FPS is Metro: 2033. Metro gives up Crysis’ lush tropics and frozen wastelands for an underground experience, but even underground it can be quite brutal on GPUs, which is why it’s also our new benchmark of choice for looking at power/temperature/noise during a game. If its sequel due this year is anywhere near as GPU intensive then a single GPU may not be enough to run the game with every quality feature turned up.
Metro was another game that the GTX 680 had trouble with, leading to it trailing the 7970 by the slightest bit. With multiple GPUs thrown into the mix that slight gap has significantly widened, leading to the GTX 690 once again trailing the 7970CF, particularly at 2560 and 5760. In this case the GTX 690 is only hitting 82% of the 7970CF’s performance at 5760, and 84% at 2560. It’s only at 1920 (and 100fps) that the GTX 690 can catch up. So much like the GTX 680, NVIDIA’s not necessarily off to a great start here compared to AMD.
Meanwhile GTX 690 performance relative to the GTX 680 SLI once again looks good here, although not quite as great as with Crysis. At 5760 the GTX 690 achieves 96% of the performance, and at 2560 97% of the performance. So far the GTX 690 is more or less living up to NVIDIA’s claims of being two 680s on a single card.
For racing games our racer of choice continues to be DiRT, which is now in its 3rd iteration. Codemasters uses the same EGO engine between its DiRT, F1, and GRID series, so the performance of EGO has been relevant for a number of racing games over the years.
Interestingly enough it looks like the GTX 690 has met its match on, of all things, DiRT 3. At 5760 the GTX 690 finally falls behind the GTX 680 SLI to a noticeable degree, trailing NVIDIA’s dual video card setup by 8%. We’re not entirely sure what’s going on that keeps the GTX 690 from boosting as much here, but whatever it is takes its toll at these high resolutions. Meanwhile at 2560 things look much better for the GTX 690, with it once again trailing the GTX 680 SLI by only 3%.
Meanwhile compared to AMD’s 7970CF, NVIDIA still has some trouble pulling away. At 5760 the GTX 690 and 7970CF are virtually tied; it takes a drop to 2560 for the GTX 690 to open up and take any kind of real lead.
Our minimum framerates in DiRT 3 reflect our averages, which in turn reflects the fact that DiRT 3 has rather consistent performance.
Total War: Shogun 2
Total War: Shogun 2 is the latest installment of the long-running Total War series of turn based strategy games, and alongside Civilization V is notable for just how many units it can put on a screen at once. As it also turns out, it’s the single most punishing game in our benchmark suite (on higher end hardware at least).
Unfortunately for NVIDIA they’re in a bit of a bind with Shogun 2. The March 22nd patch effectively broke Kepler performance under certain situations. GTX 680 performance has been nearly halved due to that patch; what was 33fps at 2560 is now 18fps. As near as we can tell this is a problem with the game – Shogun 2 has no idea what the GTX 680 is and doesn’t know what to do with it – but the end result is that GTX 680/690 users are in trouble here.
As it stands 2560 is utterly broken – perhaps due to MSAA or shadowing – but our results at 5760 and 1920, which use the Very High profile, have not dropped due to the patch, and as a result we have at least some confidence in them. To that end, while NVIDIA’s multi-GPU scaling is quite good at 5760 at 95%, not only does AMD have a lead over the GTX 680 with a single 7970, but Crossfire scaling is nearly perfect. Furthermore the GTX 690 once more has trouble boosting here, leading to a 7% deficit compared to the GTX 680 SLI. As a result the GTX 690 only reaches 85% of the performance of the 7970CF here. Things look better at 1920, but besides being a rather low resolution for the GTX 690 we’re also CPU limited by that point.
Batman: Arkham City
Batman: Arkham City is loosely based on Unreal Engine 3, while the DirectX 11 functionality was apparently developed in-house. With the addition of these features Batman is a far more GPU-demanding game than its predecessor was, particularly with tessellation cranked up to high.
In single-GPU scenarios the GTX 680 and 7970 are almost tied here, but in multi-GPU scenarios that’s an entirely different story. AMD’s Crossfire implementation here is completely busted; performance gains are minimal, and performance losses are possible. Meanwhile NVIDIA is seeing 85%+ scaling, which shoots the GTX 690 well ahead of the 7970 and ensures that the GTX 690 is getting a fluid framerate at 5760. AMD simply isn’t competitive here at this time.
Meanwhile intra-NVIDIA competition is once again strong. The GTX 690 hits between 96% and 97% of the performance of the GTX 680 SLI, indicating that GPU boost is managing to boost to some pretty high clocks here.
Portal 2 continues the long and proud tradition of Valve’s in-house Source engine. While Source continues to be a DX9 engine, Valve has continued to upgrade it over the years to improve its quality, and combined with their choice of style you’d have a hard time telling it’s over 7 years old at this point. Consequently Portal 2’s performance does get rather high on high-end cards, but we have ways of fixing that…
We’re just going to jump straight into SSAA here, as even the 7970CF can get 61fps at 5760 with it. This is the first game in which the GTX 690 solidly beats the 7970CF for reasons other than bugs; at 5760 with impeccable image quality the GTX 690 is ahead of the 7970CF by 38%, and even at 2560 that’s a 27% lead. In fact you could nearly play Portal 2 with SSAA and with 3D Vision at 2560, at 60fps, if there were even a 3DV monitor at that resolution.
With that said, SSAA really clobbers the GTX 690. This is the worst performance we’ll see relative to the GTX 680 SLI, as the GTX 690 only reaches 93% of the GTX 680 SLI’s performance at 5760, and 90% at 2560. Thankfully for the GTX 690 it’s the difference between 120fps and more than 120fps.
Its popularity aside, Battlefield 3 may be the most interesting game in our benchmark suite for a single reason: it’s the first AAA DX10+ game. It’s been 5 years since the launch of the first DX10 GPUs, and 3 whole process node shrinks later we’re finally to the point where games are using DX10’s functionality as a baseline rather than an addition. Not surprisingly BF3 is one of the best looking games in our suite, but as with past Battlefield games that beauty comes with a high performance cost.
Battlefield 3 has been NVIDIA’s crown jewel; a widely played multiplayer game with a clear lead for NVIDIA hardware. And with multi-GPU thrown into the picture that doesn’t change, leading to the GTX 690 once again taking a very clear lead here over the 7970CF at all resolutions. With that said, we see something very interesting at 5760, with NVIDIA’s lead shrinking by quite a bit. What was a 21% lead at 2560 is only a 10% lead at 5760. So far we haven’t seen any strong evidence of NVIDIA being VRAM limited with only 2GB of VRAM, and while this isn’t strong evidence that the situation has changed, it does warrant consideration. If anything is going to be VRAM limited, after all, it’s BF3.
Meanwhile compared to the GTX 680 SLI the GTX 690 is doing okay here. It only achieves 93% of the GTX 680 SLI’s performance at 2560, but for some reason closes the gap at 5760, recovering to 96% of the performance of the dual video card setup.
Our next game is Starcraft II, Blizzard’s 2010 RTS megahit. Much like Portal 2 it’s a DX9 game designed to run on a wide range of hardware so performance is quite peppy with most high-end cards, but it can still challenge a GPU when it needs to.
We mainly throw in SC2 here for consistency. Since it doesn’t support the wide aspect ratios necessary for multi-monitor setups it’s not very challenging for our dual-GPU setups. The 7970CF ends up being the loser here at only 131.9fps at 2560, while the GTX 600 series setups are at 148fps. It’s interesting to note, though, that the GTX 690 does very well here compared to the GTX 680 SLI, with the GTX 690 reaching 100% of the performance of the GTX 680 SLI.
The Elder Scrolls V: Skyrim
Bethesda's epic sword & magic game The Elder Scrolls V: Skyrim is our RPG of choice for benchmarking. It's altogether a good CPU benchmark thanks to its complex scripting and AI, but it also can end up pushing a large number of fairly complex models and effects at once, especially with the addition of the high resolution texture pack.
Unfortunately for AMD, Crossfire isn’t just broken in Batman; it’s broken here as well. If it could scale, the 7970CF would be CPU limited like the GTX 600 series setups; instead we’re seeing negative scaling. Furthermore at 2560 that’s a very choppy 58.8fps for the 7970CF.
In any case, even though we’re CPU limited it’s interesting to see that the GTX 690 still can’t quite catch the GTX 680 SLI. At 2560 it trails by 2%, which is on the edge of experimental variability.
Our final game, Civilization 5, gives us an interesting look at things that other RTSes cannot match, with a much weaker focus on shading in the game world, and a much greater focus on creating the geometry needed to bring such a world to life. In doing so it uses a slew of DirectX 11 technologies, including tessellation for said geometry, driver command lists for reducing CPU overhead, and compute shaders for on-the-fly texture decompression.
As with Total War: Shogun 2 we’re reaching the point where we’re CPU limited. Even at 2560 we’re clearly capping out at around 95fps, and at 1920 our results just get outright weird to the point where we may be seeing the first and only evidence of the overhead from NVIDIA’s move to static scheduling on Kepler.
In any case, even though we’re approaching a CPU bottleneck the GTX 690 still does well enough for itself here versus both AMD and NVIDIA. We’re looking at a 5% lead at 2560 over the 7970CF, while performance reaches 99% of the GTX 680 SLI.
Our look at compute performance is going to be a brief one. Our OpenGL AES and DirectCompute Fluid Simulation benchmarks simply don’t scale with multiple GPUs, so we’ll skip those (though the data is still available in Bench).
Our first compute benchmark comes from Civilization V, which uses DirectCompute to decompress textures on the fly. Civ V includes a sub-benchmark that exclusively tests the speed of their texture decompression algorithm by repeatedly decompressing the textures required for one of the game’s leader scenes. Note that this is a DX11 DirectCompute benchmark.
Given the nature of the benchmark, it’s not surprising that we see a performance regression here with some setups. The nature of this benchmark is that it doesn’t split across multiple GPUs well, though that doesn’t stop AMD and NVIDIA from tying. This doesn’t impact real game performance as we’ve seen, but it’s a good reminder of the potential pitfalls of multi-GPU configurations. Though AMD does deserve some credit here for gaining on their single GPU performance, pushing their lead even higher.
Our other compute benchmark is SmallLuxGPU, the GPU ray tracing branch of the open source LuxRender renderer. We’re now using a development build from the version 2.0 branch, and we’ve moved on to a more complex scene that hopefully will provide a greater challenge to our GPUs.
Unlike the Civ V compute benchmark, SLG scales very well with multiple GPUs, nearly doubling in performance. Unfortunately for NVIDIA GK104 shows its colors here as a compute-weak GPU, and even with two of them we’re nowhere close to one 7970, let alone the monster that is two. If you’re looking at doing serious GPGPU compute work, you should be looking at Fermi, Tahiti, or the future Big Kepler.
Power, Temperature, & Noise
As always, we’re wrapping up our look at a video card’s stock performance with a look at power, temperature, and noise. More so than even single GPU cards, this is perhaps the most important set of metrics for a multi-GPU card. Poor cooling that results in high temperatures or ridiculous levels of noise can quickly sink a multi-GPU card’s chances. Ultimately with a fixed power budget of 300W or 375W, the name of the game is dissipating that heat as quietly as you can without endangering the GPUs.
|GeForce GTX 600 Series Voltages|
|Ref GTX 690 Boost Load||Ref GTX 680 Boost Load||Ref GTX 690 Idle|
|1.175v||1.175v||0.987v|
It’s interesting to note that the GPU voltages on the GTX 680 and GTX 690 are identical; both idle at 0.987v, and both max out at 1.175v for the top boost bin. It would appear that NVIDIA’s binning process for the GTX 690 is looking almost exclusively at leakage; they don’t need to find chips that operate at a lower voltage, they merely need chips that don’t waste too much power.
NVIDIA has progressively brought down their idle power consumption and it shows. Where the GTX 590 would draw 155W at the wall at idle, we’re drawing 130W with the GTX 690. For a single GPU NVIDIA’s idle power consumption is every bit as good as AMD’s; however, NVIDIA doesn’t have any way of shutting off the 2nd GPU like AMD does, meaning that the GTX 690 still draws more power at idle than the 7970CF. Being able to shut off that 2nd GPU really mitigates one of the few remaining disadvantages of a dual-GPU card, and it’s a shame NVIDIA doesn’t have something similar.
Long idle power consumption merely amplifies this difference. Now NVIDIA is running 2 GPUs while AMD is running 0, which leaves the GTX 690 pulling 19W more at the wall while doing absolutely nothing.
Thanks to NVIDIA’s binning, the load power consumption of the GTX 690 looks very good here. Under Metro we’re drawing 63W less at the wall compared to the GTX 680 SLI, even though we’ve already established that performance is within 5%. The gap with the 7970CF is even larger; the 7970CF may have a performance advantage, but it comes at a cost of 175W more at the wall.
OCCT power is much the same story. Here we’re drawing 429W at the wall, an incredible 87W less than the GTX 680 SLI. In fact a GTX 690 draws less power than a single GTX 580. That is perhaps the single most impressive statistic you’ll see today. Meanwhile compared to the 7970CF the difference at the wall is 209W. The true strength of multi-GPU cards is their power consumption relative to multiple cards, and thanks to NVIDIA’s ability to get the GTX 690 so very close to the GTX 680 SLI the GTX 690 is absolutely sublime here.
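To put those wall-power deltas in some perspective, a quick back-of-the-envelope calculation is instructive. The wattage figures are the OCCT deltas quoted above; the usage pattern (4 hours of load per day) and electricity rate ($0.12/kWh) are purely illustrative assumptions, not part of our testing:

```python
# Back-of-the-envelope energy impact of the load power deltas quoted above.
# Assumptions (illustrative, not measured): 4 hours/day at load, $0.12/kWh.
deltas_w = {
    "vs GTX 680 SLI": 87,   # W saved at the wall under OCCT
    "vs 7970CF": 209,       # W saved at the wall under OCCT
}
hours_per_year = 4 * 365
rate_per_kwh = 0.12  # USD

for name, watts in deltas_w.items():
    kwh = watts * hours_per_year / 1000
    print(f"{name}: {kwh:.0f} kWh/yr, ${kwh * rate_per_kwh:.0f}/yr")
```

Even under these modest assumptions the gap versus the 7970CF works out to roughly 300 kWh per year; actual savings will of course depend on your own usage and rates.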
Moving on to temperatures, how well does the GTX 690 do? Quite well. Like all dual-GPU cards GPU temperatures aren’t as good as with single-GPU cards, but it’s also no worse than any dual-GPU setup. In fact of all the dual-GPU cards in our benchmark selection this is the coolest, beating even the GTX 590. Kepler’s low power consumption really pays off here.
For load temperatures we’re going to split things up a bit. While our official testing protocol is to test with our video cards directly next to each other when doing multi-card configurations, we’ve gone ahead and tested the GTX 680 SLI both in an adjacent and spaced configuration, with the spaced configuration marked with a *.
When it comes to load temperatures the GTX 690 once again does well for itself. Under Metro it’s warmer than most single GPU cards, but only barely so. The difference from a GTX 680 is only 3C, 1C with a spaced GTX 680 SLI, and it’s 4C cooler than an adjacent GTX 680 SLI setup. More importantly perhaps is that Metro temperatures are 6C cooler than on the GTX 590.
As for OCCT, the numbers are different but the story is the same. The GTX 690 is 3C warmer than the GTX 680, 1C warmer than a spaced GTX 680 SLI, and 4C cooler than an adjacent GTX 680 SLI. Meanwhile temperatures are now 8C cooler than the GTX 590 and even 6C cooler than the GTX 580.
So the GTX 690 does well with power consumption and temperatures, but is there a noise tradeoff? At idle the answer is no; at 40.9dB it’s effectively as quiet as the GTX 680 and incredibly enough over 6dB quieter than the GTX 590. NVIDIA’s progress at idle continues to impress, even if they can’t shut off the second GPU.
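For reference, decibel differences map onto sound pressure ratios logarithmically, so that 6dB gap over the GTX 590 corresponds to roughly double the sound pressure. A quick sketch of the standard SPL relation (a textbook formula, not tied to our measurement hardware):

```python
# Standard sound pressure level relation: dB = 20 * log10(p1 / p0),
# so a dB difference converts to a pressure ratio of 10 ** (dB / 20).
def pressure_ratio(db_difference):
    return 10 ** (db_difference / 20)

for db in (1, 3, 6, 10):
    print(f"{db} dB -> {pressure_ratio(db):.2f}x sound pressure")
```

Note that perceived loudness is a separate (and murkier) question; this only converts the measured dB figures into physical pressure ratios.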
When NVIDIA was briefing us on the GTX 690 they said that the card would be notably quieter than even a GTX 680 SLI, which is quite the claim given how quiet the GTX 680 SLI really is. So out of all the tests we have run, this is perhaps the result we’ve been the most eager to get to. The results are simply amazing. The GTX 690 is quieter than a GTX 680 SLI alright; it’s quieter than a GTX 680 SLI whether the cards are adjacent or spaced. The difference with spaced cards is only 0.5dB under Metro, but it’s still a difference. Meanwhile with that 55.1dB noise level the GTX 690 is doing well against a number of other cards here, effectively tying the 7970 and beating out every other multi-GPU configuration on the board.
OCCT is even more impressive, thanks to a combination of design and the fact that NVIDIA’s power target system effectively serves as a throttle for OCCT. 55.8dB is not only just a hair louder than under Metro, but it’s still a hair quieter than a spaced GTX 680 SLI setup. It’s also quieter than a 7970, a GTX 580, and every other multi-GPU configuration we’ve tested. The only thing it’s not quieter than is the GTX 680 and the 6970.
All things considered the GTX 690 is not that much quieter than the GTX 590 under gaming loads, but NVIDIA has improved performance just enough that they can beat their own single-GPU cards in SLI. And at the same time the GTX 690 consumes significantly less power for what amounts to a temperature tradeoff of only a couple of degrees. The fact that the GTX 690 can’t quite reach the GTX 680 SLI’s performance may have been disappointing thus far, but after looking at our power, temperature, and noise data it’s a massive improvement on the GTX 680 SLI for what amounts to a very small gaming performance difference.
Overclocked: Power, Temperature, & Noise
Our final task is our look at GTX 690’s overclocking capabilities. NVIDIA has told us that with GTX 690 they weren’t just looking to duplicate GTX 680 SLI’s performance, but also its overclocking capabilities. This is quite the lofty goal, since with GTX 690 NVIDIA is effectively packing 2 680s into the same amount of space, leaving far less space for VRM circuitry and trace routing.
|GeForce 600 Series Overclocking|
|GTX 690||GTX 680|
|Shipping Core Clock||915MHz||1006MHz|
|Shipping Max Boost Clock||1058MHz||1110MHz|
|Shipping Memory Clock||6GHz||6GHz|
|Shipping Max Boost Voltage||1.175v||1.175v|
|Overclock Core Clock||1040MHz||1106MHz|
|Overclock Max Boost Clock||1183MHz||1210MHz|
|Overclock Memory Clock||7GHz||6.5GHz|
|Overclock Max Boost Voltage||1.175v||1.175v|
In practice NVIDIA has not quite kept up with the GTX 680 in some ways, and completely exceeded it in others. When it comes to the core clock we didn’t quite reach parity with our reference GTX 680; the GTX 680’s highest boost clock bin could hit 1210MHz, while the GTX 690’s highest boost clock bin topped out at 1183MHz, some 27MHz (2%) slower.
On the other hand, our memory overclock is so high as to be within the “this doesn’t seem physically possible” range. As we have discussed time and time again, GDDR5 memory busses are difficult to run at high clocks on a good day, never mind a bad day. With GF110 NVIDIA couldn’t get too far past 4GHz, and even with GTX 680 NVIDIA was only shipping at 6GHz.
It would appear that no one has told NVIDIA’s engineers that 7GHz is supposed to be impossible, and as a result they’ve gone and done the unthinkable. Some of this is certainly down to the luck of the draw, but it doesn’t change the fact that our GTX 690 passed every last stability test we could throw at it at 7GHz. And what makes this particularly interesting is the difference between the GTX 680 and the GTX 690 – both are equipped with 6GHz GDDR5 RAM, but while the GTX 680 is equipped with Hynix the GTX 690 is equipped with Samsung. Perhaps the key to all of this is the Samsung RAM?
In any case, our final result was a +125MHz core clock offset and a +1000MHz memory clock offset, which translates into a base clock of 1040MHz, a max boost clock of 1183MHz, and a memory clock of 7GHz. This represents a 12%-14% core overclock and a 17% memory overclock, which is going to be enough to put quite the pep in the GTX 690’s step.
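The percentages above fall straight out of the clocks in the table; a quick sanity check (using the 6.008GHz effective memory data rate from the spec table, so the +1000MHz offset lands at 7.008GHz):

```python
# Sanity check of the overclock percentages quoted above.
# Clocks in MHz, taken from the overclocking table; memory uses the
# 6008MHz effective data rate, +1000MHz offset -> 7008MHz.
stock = {"base": 915, "boost": 1058, "mem": 6008}
oc    = {"base": 1040, "boost": 1183, "mem": 7008}

for key in stock:
    gain = (oc[key] - stock[key]) / stock[key] * 100
    print(f"{key}: +{gain:.1f}%")
```

The base clock gains 13.7% and the max boost clock 11.8% (hence the 12%-14% range), while memory comes in at roughly 17%.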
As always we’re going to start our look at overclocking in reverse, beginning with power, temperature, and noise. For the purpose of our testing we’ve tested our GTX 690 at two different settings: at stock clocks with the power target set to 135% (GTX 690 PT), and with our custom overclock alongside the same 135% power target (GTX 690 OC). This allows us to look at both full overclocking and the safer option of merely maxing out the boost clocks for all they’re worth.
As expected, merely increasing the power target to 135% was enough to increase the GTX 690’s power consumption, though overclocking further adds to that. Even with the power target increase however, the power consumption at the wall for the GTX 690 is still lower than the GTX 680 SLI by over 20W, which is quite impressive. As we’ll see in our section on performance this is more than enough to erase the GTX 690’s performance gap, meaning at this point it’s still consuming less power than the GTX 680 SLI while offering better performance than its dual-card cousin.
It’s only after outright overclocking that we finally see power consumption equalize with the GTX 680 SLI. The overclocked GTX 690 is within 10W of the GTX 680 SLI, though as we’ll see the performance is notably higher.
What does playing with clocks and the power target do to temperatures? The impact isn’t particularly bad, though we’re definitely reaching the highest temperatures we really want to hit. For the GTX 690 PT things are actually quite good under Metro, with the temperature not budging an inch even with the higher power consumption. Under OCCT however temperatures have risen 5C to 87C. Meanwhile the GTX 690 OC reaches 84C under Metro and a toasty 89C under OCCT. These should be safe temperatures, but I would not want to cross 90C for any extended period of time.
Finally we have load noise. Unsurprisingly, because load temperatures did not go up for the GTX 690 PT under Metro, load noise has not gone up either. On the other hand load noise under OCCT has gone up 3.5dB, making the GTX 690 PT just as loud as our GTX 680 SLI in its adjacent configuration. In practice the noise impact from raising the power target is going to trend closer to Metro than OCCT, but Metro is likely an overly optimistic scenario; there’s going to be at least a small increase in noise here.
The GTX 690 OC meanwhile approaches the noise level of the GTX 680 SLI under Metro, and shoots past it under OCCT. Considering the performance payoff some users will no doubt find this worth the noise, but it should be clear that overclocking like this means sacrificing the stock GTX 690’s quietness.
Overclocked: Gaming Performance
When it comes to overclocking we're effectively looking at two different scenarios. Merely raising the power target is enough to erase the GTX 680 SLI's small lead in virtually all games, and in most games it puts the GTX 690 ahead by an equally small degree. On the other hand with full overclocking the GTX 690 can easily pass the GTX 680 SLI and close the gap on the 7970CF in games where AMD has the lead.
Traditionally dual-GPU cards have been a mixed bag. More often than not they have to sacrifice a significant amount of single-GPU performance in order to put two GPUs on a single card, and in the rare occasions where that tradeoff doesn’t happen there’s some other tradeoff such as a loud cooler or immense power consumption. NVIDIA told us that they could break this tradition and put two full GTX 680s on a single card, and that they could do so while making it quieter and less power hungry than a dual video card SLI setup. After going through our benchmarking process we can safely say that NVIDIA has met their goals.
From a gaming performance perspective we haven’t seen a dual-GPU card reach the performance of a pair of high-end cards in SLI/CF since the Radeon HD 4870X2 in 2008, so it’s quite refreshing to see someone get so close again 4 years later. The GTX 690 doesn’t quite reach the performance of the GTX 680 SLI, but it’s very, very close. Based on our benchmarks we’re looking at 95% of the performance of the GTX 680 SLI at 5760x1200 and 96% of the performance at 2560x1600. These are measurable differences, but only just. For all practical purposes the GTX 690 is a single card GTX 680 SLI – a single card GTX 680 SLI that consumes noticeably less power under load and is at least marginally quieter too.
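For those curious how an overall “percent of GTX 680 SLI” figure like that is derived, the sketch below averages per-game performance ratios. The fps values here are hypothetical placeholders, not our actual benchmark data:

```python
# Illustrative only: deriving an overall relative-performance figure
# from per-game results. These fps values are hypothetical, not our data.
gtx690 = [88.0, 61.5, 120.3, 74.2]   # fps per game, GTX 690
sli680 = [92.0, 64.0, 124.0, 77.5]   # fps per game, GTX 680 SLI

ratios = [a / b for a, b in zip(gtx690, sli680)]
overall = sum(ratios) / len(ratios) * 100  # simple mean of per-game ratios
print(f"GTX 690 at {overall:.0f}% of GTX 680 SLI")
```

A simple mean of ratios is used here for clarity; a geometric mean is another common choice and gives nearly identical answers when the per-game ratios are this tightly clustered.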
With that said, this would typically be the part of the review where we would inject a well-placed recap of the potential downsides of multi-GPU technology; but in this case there’s really no need. Unlike the GTX 590 and unlike the GTX 295 NVIDIA is not making a performance tradeoff here compared to their single-GPU flagship card. When SLI works the GTX 690 is the fastest card out there, and when SLI doesn’t work the GTX 690 is still the fastest card out there. For the first time in a long time using a dual-GPU card doesn’t mean sacrificing single-GPU performance, and that’s a game changer.
At this point in time NVIDIA offers two different but compelling solutions for ultra-enthusiast performance; the GTX 690 and GTX 680 SLI, and they complement each other well. For most situations the GTX 690 is going to be the way to go thanks to its lower power consumption and lower noise levels, but for cases that need fully exhausting video cards the GTX 680 SLI can offer the same gaming performance at the same price. Unfortunately we’re going to have to put AMD out of the running here; as we’ve seen in games like Crysis and Metro the 7970 in Crossfire has a great deal of potential, but as it stands Crossfire is simply too broken overall to recommend.
The only real question I suppose is simply this: is the GTX 690 worthy of its $999 price tag? I don’t believe there’s any argument to be had with respect to whether the GTX 690 is worth getting over the GTX 680 SLI, as we’ve clearly answered that above. As a $999 card it doesn’t double the performance of the $499 GTX 680, but SLI has never offered quite that much of a performance boost. However at the same time SLI has almost always been good enough to justify the cost of another GPU if you must have performance better than what the fastest single GPU can provide, and this is one of those times.
Is $999 expensive? Absolutely. Is it worth it? If you’re gaming at 2560x1600 or 5760x1200, the GTX 690 is at least worth the consideration. You can certainly get by on less, but if you want 60fps or better and you want it with the same kind of ultra high quality single GPU cards can already deliver at 1920x1080, then you can’t do any better than the GTX 690.
Wrapping things up, there is one question left I feel like we still don’t have a good answer to: how much RAM a $999 card should have. NVIDIA went with a true equal for the GTX 680 SLI, right down to the 2GB of VRAM per GPU. Looking back at what happened to the Radeon HD 5970 and its 1GB of VRAM per GPU – we can’t even run our 5760x1200 benchmarks on it, let alone a couple of 2560x1600 benchmarks – I’m left uneasy. None of our benchmarks today seem to require more than 2GB of VRAM, but that much VRAM has been common in high-end cards since late 2010; the day will come when 2GB isn’t enough, and I'm left to wonder when. A GTX 690 with 4GB of VRAM per GPU would be practically future-proof, but with 2GB of VRAM NVIDIA is going to be cutting it close.