Original Link: http://www.anandtech.com/show/4239/nvidias-geforce-gtx-590-duking-it-out-for-the-single-card-king
NVIDIA’s GeForce GTX 590: Duking It Out For The Single Card King
by Ryan Smith on March 24, 2011 9:00 AM EST
It really doesn’t seem like it’s been all that long, but it’s been nearly a year and a half since NVIDIA has had a dual-GPU card on the market. The GeForce GTX 295 was launched in January of 2009, the first card based on the 55nm die shrink of the GT200 GPU. For most of the year the GTX 295 enjoyed bragging rights as the world’s fastest video card; however the launch of the Radeon HD 5000 series late in 2009 effectively put an end to the GTX 295’s run as a competitor.
Even with the launch of the GTX 400 series in March of 2010, a new dual-GPU card from NVIDIA remained the stuff of rumors—a number of rumors claimed we’d see a card based on GF10X, but nothing ever materialized. Without a dual-GPU card, NVIDIA had to settle for having the fastest single-GPU card on the market through the GTX 480, a market position worth bragging about, but one that was always shadowed by AMD’s dual-GPU Radeon HD 5970. Why we never saw a dual-GPU GTX 400 series card we’ll never know—historically NVIDIA has not released a dual-GPU card for every generation—but it’s a reasonable assumption that GF100’s high leakage made such a part unviable.
But at long last the time has come for a new NVIDIA dual-GPU card. GF100’s refined follow-up, GF110, put the kibosh on leakage and allowed NVIDIA to crank up clocks and reduce power consumption throughout their GTX 500 lineup. This also seems to have been the key to making a dual-GPU card possible, as NVIDIA has finally unveiled their new flagship card: the GeForce GTX 590. Launching a mere two weeks after AMD’s latest flagship card, the Radeon HD 6990, NVIDIA is gunning to take back their spot at the top. But will they reach their goal? Let’s find out.
| | GTX 590 | GTX 580 | GTX 570 | GTX 560 Ti |
|---|---|---|---|---|
| Stream Processors | 2 x 512 | 512 | 480 | 384 |
| Texture Address / Filtering | 2 x 64/64 | 64/64 | 60/60 | 64/64 |
| ROPs | 2 x 48 | 48 | 40 | 32 |
| Memory Clock | 853MHz (3414MHz data rate) GDDR5 | 1002MHz (4008MHz data rate) GDDR5 | 950MHz (3800MHz data rate) GDDR5 | 1002MHz (4008MHz data rate) GDDR5 |
| Memory Bus Width | 2 x 384-bit | 384-bit | 320-bit | 256-bit |
| VRAM | 2 x 1.5GB | 1.5GB | 1.25GB | 1GB |
| FP64 | 1/8 FP32 | 1/8 FP32 | 1/8 FP32 | 1/12 FP32 |
| Transistor Count | 2 x 3B | 3B | 3B | 1.95B |
| Manufacturing Process | TSMC 40nm | TSMC 40nm | TSMC 40nm | TSMC 40nm |
Given that this launch takes place only two weeks after the Radeon HD 6990, it’s only natural to make comparisons to AMD’s recently launched dual-GPU card. In fact as we’ll see the cards are similar in a number of ways, which is a bit surprising given that the last time both companies had competing dual-GPU cards, the GTX 295 and Radeon HD 4870X2 were quite different in design.
But before we get too far, let’s start at the top with the specs. As is now customary for dual-GPU cards, NVIDIA has put together two of their top-tier GPUs and turned down the clocks in order to fit within a power/heat budget. In single-card configurations we’ve seen GF110 hit 772MHz on the GTX 580, but that was for a card that can hit 300W load under the right/wrong circumstances. For the GTX 590 the core clock is down to 607MHz, while the functional unit count remains unchanged, with everything enabled. Meanwhile memory clocks have also been reduced to the lowest we’ve seen since the GTX 470: 853.5MHz (3414MHz data rate). NVIDIA has never hit very high memory clocks on the GTX 500 series, so it stands to reason that routing two 384-bit buses only makes the job harder.
All told, at these clocks comparisons to the GTX 570 are more apt than comparisons to the GTX 580. Even compared to the GTX 570, each of the GTX 590’s GPUs has only 83% of the rasterization throughput, 88% of the shading/texturing capacity, and 99.5% of the ROP throughput. Where the GTX 590 has an edge over the GTX 570 on a per-GPU basis is that with all of GF110’s functional units enabled and a 384-bit memory bus, it has 108% of the memory bandwidth and 120% of the L2 cache. As a result, while performance should be close to the GTX 570 on a per-GPU basis, it will fluctuate depending on the biggest bottleneck, with shading/texturing being among the worst scenarios and L2 cache/memory bandwidth being among the best. Consequently, total performance should be close to a GTX 570 SLI setup.
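For the curious, these percentages fall straight out of the unit counts and clocks. Here is a quick sanity-check sketch; note that the GTX 570’s 732MHz core clock and the L2 cache sizes (assuming 128KB of L2 per 64-bit memory controller on GF110) are not in the table above, so treat those figures as our own assumptions:

```python
# Per-GPU throughput ratios, GTX 590 vs. GTX 570. Clock-bound units scale
# with the core clock; shading/ROP throughput scale with units x clock;
# bandwidth scales with memory clock x bus width.
gtx590 = dict(core=607.0, shaders=512, rops=48, mem=853.5, bus=384, l2_kb=768)
gtx570 = dict(core=732.0, shaders=480, rops=40, mem=950.0, bus=320, l2_kb=640)

rasterization = gtx590["core"] / gtx570["core"]
shading = (gtx590["shaders"] * gtx590["core"]) / (gtx570["shaders"] * gtx570["core"])
rop_rate = (gtx590["rops"] * gtx590["core"]) / (gtx570["rops"] * gtx570["core"])
bandwidth = (gtx590["mem"] * gtx590["bus"]) / (gtx570["mem"] * gtx570["bus"])
l2_cache = gtx590["l2_kb"] / gtx570["l2_kb"]

for name, r in [("rasterization", rasterization), ("shading/texturing", shading),
                ("ROP", rop_rate), ("memory bandwidth", bandwidth),
                ("L2 cache", l2_cache)]:
    print(f"{name}: {r:.1%}")
```

Running the numbers reproduces the 83%/88%/99.5% deficits and the 108%/120% advantages quoted above.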
As was the case with the 6990, NVIDIA is raising the limit on power consumption. The GTX 590 is rated for a TDP of 365W, keeping in mind that NVIDIA’s definition of TDP is the maximum power draw in “real world applications”. The closest metric from AMD would be their “typical gaming power”, for which the 6990 was rated for 350W. As a result the 6990 and GTX 590 should be fairly close in power consumption most of the time. Normally only Furmark and similar programs would generate a significant difference, but as we’ll see the rules have changed starting with NVIDIA’s latest drivers. Meanwhile for the idle TDP NVIDIA does not specify a value, but it should be under 40W.
With performance on paper that should rival the GTX 570 SLI—and by extension the Radeon HD 6990—it shouldn’t come as a big surprise that NVIDIA is pricing the GTX 590 to be competitive with AMD’s card. The MSRP of the GTX 590 will be $699, the same price the 6990 launched at two weeks ago. The card we’re looking at today, the EVGA GeForce GTX 590 Classified, is a premium package that will run a bit higher at $729. EVGA won’t be the only vendor offering a premium GTX 590 package, and while we don’t have a specific breakdown by vendor, expect a range of prices. Ultimately, cards at the $699 MSRP will be competing with the 6990, the 6970CF, and the GTX 570 SLI.
As for availability, it’s a $700 card. NVIDIA isn’t expecting any real problems, but these are low-volume cards, so it’s quite likely they’ll go in and out of stock.
| March 2011 Video Card MSRPs | | |
|---|---|---|
| GeForce GTX 590 | $700 | Radeon HD 6990 |
| | $320 | Radeon HD 6970 |
| | $240 | Radeon HD 6950 1GB |
| | $190 | Radeon HD 6870 |
| | $160 | Radeon HD 6850 |
| | $110 | Radeon HD 5770 |
Meet The EVGA GeForce GTX 590 Classified
For the GTX 590 launch, NVIDIA once again sampled partner cards rather than sampling reference cards directly to the press. Even so, all of the cards launching today are more-or-less reference designs with a few cosmetic changes, so everything we’re describing here applies to all other GTX 590 cards unless otherwise noted.
With that out of the way, the card we were sampled is the EVGA GeForce GTX 590 Classified, a premium GTX 590 offering from EVGA. The important difference from the reference GTX 590 is that the GTX 590 Classified ships at slightly higher clocks—630/864 vs. 607/853.5—and comes with a premium package, which we will get into later. The GTX 590 Classified also commands a premium price of $729.
Jumping right into things, the GTX 590 ends up having far more in common with the recently launched 6990 than it does the original GTX 295. NVIDIA has gone with a single PCB design with GPUs at either end of the card. In the center is an 80mm fan, which blows hot air out either end of the card. As a result the GTX 590 is extremely close to the 6990 in design with only a few critical differences, so it’s those differences we’re going to focus on.
Starting at the PCB level, we’ve already touched on how NVIDIA is placing GPUs at either end of the card. Power is provided through a set of 10-phase VRMs; we don’t recognize the VRMs being used here. Meanwhile PCIe bridging is provided by NVIDIA’s venerable NF200 PCIe bridge, which offers PCIe 2.0 x16 communication between each GPU, with the GPUs sharing the x16 link to the CPU. 24 1Gb GDDR5 chips cover the card, half on each side of the PCB. Our card was outfitted with Samsung 5GHz modules, meaning the 384-bit memory bus is going to error out long before the GDDR5 chips themselves reach their limits, given the card’s low default clocks.
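The memory layout above works out cleanly on paper. GDDR5 transfers four bits per pin per clock, and each 1Gb chip presents a 32-bit interface (a standard GDDR5 characteristic rather than something NVIDIA has published for this card), so the chip count, capacity, and per-GPU bandwidth can be sketched like this:

```python
# Back-of-the-envelope GDDR5 math for the GTX 590's per-GPU memory subsystem.
base_clock_mhz = 853.5
data_rate_mhz = base_clock_mhz * 4            # GDDR5 is quad-pumped: 3414MHz effective
bus_width_bits = 384
chips_per_gpu = bus_width_bits // 32          # twelve 32-bit chips build the 384-bit bus
vram_per_gpu_gb = chips_per_gpu * 1 / 8       # 12 chips x 1Gb = 1.5GB per GPU
bandwidth_gbps = data_rate_mhz * bus_width_bits / 8 / 1000  # GB/s per GPU

print(f"{data_rate_mhz:.0f}MHz data rate, {chips_per_gpu} chips per GPU, "
      f"{vram_per_gpu_gb:.1f}GB per GPU, {bandwidth_gbps:.1f}GB/s per GPU")
```

Twelve chips per GPU, times two GPUs, accounts for the 24 chips split across both sides of the PCB, and matches the 2 x 1.5GB in the spec table.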
Moving up a level is the GTX 590’s cooling apparatus. NVIDIA is using a design very similar to the GTX 580, re-oriented for a dual-GPU card. A metal baseplate provides structural rigidity to the card along with serving as a basic heatsink for the VRMs and GDDR5 memory modules. The GPUs themselves are cooled by the same style vapor chamber based aluminum heatsinks we saw on the GTX 580. Unlike AMD, NVIDIA isn’t using an exotic phase change thermal compound here, but they are using a high end compound, specifically Shin-Etsu MicroSi X23-7762.
When it comes to cooling, the biggest immediate difference from the similarly designed 6990 is going to be the choice of fan. AMD went with a blower, similar to what we’ve seen time and time again on the GTX 400 & 500 series, along with the Radeon HD 5000 & 6000 series. NVIDIA however went with a traditional 80mm fan, which isn’t something we’ve seen on high-end cards as of late. In practice the functionality is identical—each blows air out either end of the card. The biggest difference is going to be airflow and noise; by launching second, NVIDIA has seen AMD’s hand and knows that the 6990 ended up being rather loud. Consequently one of the promoted features is NVIDIA’s claim that the GTX 590 will be among the quietest dual-GPU cards in recent years—as we’ll see, this is a claim that definitely has merit when compared to the 6990.
Finally, topping the card is a removable shroud that encloses the card so that air can only exit immediately out of the front and rear ends of the card. Compared to the 6990, NVIDIA has kept their common recessed design, with the shroud & fan slightly recessed compared to the thickness of the card elsewhere. Compared once more to the 6990, one quirk is that while AMD left a whole slot’s worth of ventilation available on either side of the card, on the external exhaust side NVIDIA only has what amounts to half a slot of ventilation. The other half of the slot is occupied by a DVI connector, meaning the GTX 590 has a smaller external ventilation opening than even the GTX 580, even though TDP is up. As you might expect, this means that more hot air is being exhausted inside the case, making effective case cooling even more critical than it was with the 6990.
Case in point, we once again bring you our hard drive temperature chart, showcasing the impact of exhausting hot air directly toward our test bed’s hard drive cage.
| Seagate 500GB Hard Drive Temperatures | |
|---|---|
| GeForce GTX 590 | 42C |
| Radeon HD 6990 | 37C |
| Radeon HD 6990OC | 40C |
| Radeon HD 5970 | 31C |
At 37C the 6990 was already warm, and the 6990OC pushed this even higher to 40C. The GTX 590 does even worse in this regard, increasing our drive temperature to 42C, some 15C warmer than a 6970CF setup where all hot air is channeled outside of the case. So if you’re going to be using a GTX 590, it’s absolutely imperative that you don’t place any kind of drive behind it; it needs a clear shot toward the front intake of the case.
On that note, the GTX 590 does have one other design benefit for cooling purposes: length. While the 6990 was 12” long, the GTX 590 is only 11” long. This isn’t necessarily going to make the card any easier to fit given the cooling requirements; however it’s going to make it much easier to fit an exhaust case fan in front of the GTX 590. With the 6990 we mused about turning the closest air intake into an exhaust, but with the GTX 590 the extra space afforded by the shorter card actually makes that possible.
Moving on, on the back side of the GTX 590 we find two partial backplates. These serve as both protection for and a heatsink for the 12 GDDR5 memory modules on the back of the card. This is also one of the only hardware differences between the reference GTX 590 and the EVGA GTX 590 Classified—EVGA uses a single backplate with strategically placed holes instead. It shouldn’t change cooling performance, but it does make the card a bit easier to grip by further preventing contact with any components on the back of the card.
Ultimately with a 365W TDP NVIDIA’s advice about SLI mirrors AMD’s advice on the 6990 in CrossFire: these cards need a lot of breathing room, a lot of airflow, and a lot of power. One or more empty slots are required between cards in order to give them suitable room to pull in air, meaning only a small number of boards are going to be compatible due to PEG slot spacing. NVIDIA will be posting a Quad SLI certification list on their site; having already seen it the smallest PSU on the list is an 1100W unit and it goes up from there. With good scaling Quad SLI should provide quite a bit of performance, but it will take some specific hardware to get there.
Up next is power and display connectivity. As was the case with the 6990, the GTX 590 is set up to officially draw up to 375W through the use of two 8-pin 150W PCIe power sockets, along with a further 75W from the PCIe slot. The card should be safely capable of pulling more power than this, but unlike AMD, NVIDIA isn’t making any kind of explicit or implicit promise about what the card can handle when overclocking.
As GF110 can only drive two displays, NVIDIA is tying together the display outputs of both GPUs to drive the card’s four display outputs. Unlike AMD, who went heavy on DisplayPort and light on DVI, NVIDIA is going heavy on DVI and light on DisplayPort. Three DL-DVI ports adorn the card along with a single mini-DP port, making this the first time we’ve seen DisplayPort on an NVIDIA reference design. The port choice here seems to be heavily influenced by NVIDIA’s 3D Vision Surround—most compatible monitors are DL-DVI only, and DP to DL-DVI adapters are quite a bit more expensive than SL-DVI adapters, meaning going the adapter route like AMD would be impractical. As a consequence, NVIDIA has to place one DVI port on the 2nd slot, taking up half of the card’s external ventilation slot.
Meet The EVGA GeForce GTX 590 Classified, Cont.
Now that we’ve had a chance to discuss the GTX 590 reference hardware, let’s touch on the rest of EVGA’s package. As we previously noted the EVGA GeForce GTX 590 Classified is being positioned as a premium product with a $30 price premium, so let’s see why.
We’ll start with the box—the box EVGA is using is quite simply enormous. EVGA will be selling both single GTX 590s and pairs of GTX 590s using the same box, so the resulting box is big enough to carry two cards. Presumably this isn’t being sold in any retail stores; as a result the box is nearly blank, save for the product name on the front.
In order to sell the idea that this is a premium product, EVGA is also packing in some extras with the card. Honestly there’s probably nothing in here that’s going to be of great utility to you except possibly the mouse pad, but clearly EVGA thinks otherwise:
- EVGA Shirt (XL)
- EVGA Poster
- EVGA Branded Gaming Surface (XL Mouse Pad)
- 2x 6pin-to-8pin PCIe power adaptors
- Display adaptors: DVI-to-VGA, DVI-to-HDMI, miniDP-to-DP
- Driver/demo/utility CD
- Non-generic GTX 590 Quick Start Guide
- Redemption offer for 3DMark 2011 Advanced Edition
EVGA is offering up their usual suite of overclocking tools with the GTX 590, however only EVGA Precision is on the disc. EVGA OC Scanner and ELEET can be downloaded from EVGA’s website.
Of the software tools we’ll start with ELEET, as this is the only tool we haven’t covered before. EVGA has had ELEET for some time now for their motherboard business, as it’s their principal motherboard overclocking tool. However at the end of last year they added GPU voltage control to the utility, finally allowing users to overvolt their GPUs using only EVGA tools. We’re glad to see EVGA went with a less-is-more approach in the design of the utility, ditching any funky skinning and focusing on usability. One thing ELEET does that we have not seen in any other utility is allow for controlling the voltage of more than just the 3D game clocks; idle and low-performance voltages can also be controlled. We’re assuming the purpose here is to undervolt those modes rather than to overvolt them, as the latter is counterproductive.
At this point ELEET’s only notable weaknesses are related to the fact EVGA implemented voltage control separately from EVGA Precision’s overclocking support. As a result you need to launch ELEET separate from Precision to set any voltages if your overclocked settings require overvolting, and at the moment ELEET does not have any kind of profile support, meaning you have to manually dial in the voltages on every boot. This isn’t utility breaking, but there’s a clear potential for annoyance. EVGA tells us they will be fixing this in the future.
Update: April 2nd, 2011: Starting with the 267.91 drivers and release 270 drivers, NVIDIA has disabled overvolting on the GTX 590 entirely. So while everything we've written about ELEET remains, with the GTX 590 Classified it is effectively rendered obsolete.
The next utility in EVGA’s suite is of course EVGA’s fantastic Precision overclocking tool. Based on the famous (and now discontinued) RivaTuner technology, Precision is a custom-skinned and up-to-date overclocking utility. It features per-GPU overclocking controls, an OSD overlay, and hardware monitoring/logging. Alongside MSI’s Afterburner, we believe it sets the gold standard for GPU overclocking/monitoring utilities.
The final utility in EVGA’s suite is the EVGA OC Scanner. In a nutshell, the OC Scanner is a load-generating utility (à la Furmark) which, rather than generating a moving image, generates a static image. By generating a static image it’s possible for the software to identify any rendering errors in the image that would be indicative of a bad overclock. In other words, if you’ve overclocked your card too far, this utility will let you know. With the wider range of overclocking options afforded by ELEET, OC Scanner takes on an additional degree of importance for establishing both stability and safe operating temperatures, nicely rounding out EVGA’s software suite. All told, the suite should cover the overclocking needs of 99.9% of users.
Wrapping things up, as is customary for their high-end cards, EVGA is offering a lifetime warranty for the GTX 590, so long as the card is registered within 30 days. Notably overclocking does not void the lifetime warranty (this turned out to cause quite a bit of commotion with the 6990). Altogether, the entire EVGA GeForce GTX 590 Classified package has an MSRP of $729.
OCP Refined, A Word On Marketing, & The Test
As you may recall from our GTX 580 launch article, NVIDIA added a rudimentary OverCurrent Protection (OCP) feature to the GTX 500 series. At the time of the GTX 580 launch, OCP would clamp down on Furmark and OCCT to keep those programs from performing at full speed, as the load they generated was so high that it risked damaging the card. As a matter of principle we have been disabling OCP in all of our tests up until now, as OCP was only targeting Furmark and OCCT, meaning it didn’t provide any real protection for the card in any other situation. Our advice to NVIDIA at the time was to expand it to cover the hardware at a generic level, similar to how AMD’s PowerTune operates.
We’re glad to report that NVIDIA has taken up at least some of our advice, and that OCP is taking its first step forward since then. Starting with the ForceWare 267 series drivers, NVIDIA is now using OCP at all times, meaning OCP now protects against any possible program that would generate an excessive load (as defined by NVIDIA), and not just Furmark and OCCT. At this time there’s definitely still a driver component involved as NVIDIA still throttles Furmark and OCCT right off the bat, but everything else seems to be covered by their generic detection methods.
At this point our biggest complaint is that OCP’s operation is still not transparent to the end user. If you trigger it you have no way of knowing unless you know how the game/application should already be performing. NVIDIA tells us that at some point this will be exposed through NVIDIA’s driver API, but today is not that day. Along those lines, at least in the case of Furmark and OCCT OCP still throttles to an excessive degree—whereas AMD gets this right and caps anything and everything at the PowerTune limit, we still see OCP heavily clamp these programs to the point that our GTX 590 draws 100W more under games than it does under Furmark. Clamping down on a program to bring power consumption down to safe levels is a good idea, but clamping down beyond that just hurts the user and we hope to see NVIDIA change this.
Finally, the expansion of OCP’s capabilities is going to have an impact on overclocking. As with reporting when OCP is active, NVIDIA isn’t being fully transparent here so there’s a bit of feeling around at the moment. The OCP limit for any card is roughly 25% higher than the official TDP, so in the case of the GTX 590 this would translate into a 450W limit. This limit cannot currently be changed by the end user, so overclocking—particularly overvolting—risks triggering OCP. Depending on how well its generic detection mode works, it may limit extreme overclocking on all NVIDIA cards with the OCP hardware at the moment. Even in our own overclock testing we have some results that may be compromised by OCP, so it’s definitely something that needs to be considered.
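The OCP headroom rule of thumb described above can be expressed directly; keep in mind the ~25% figure is NVIDIA’s rough guidance rather than a published spec, so these are estimates:

```python
# Approximate power level at which OCP begins throttling, per the
# "roughly 25% above official TDP" guidance described above.
def ocp_limit_watts(tdp_watts, headroom=0.25):
    return tdp_watts * (1 + headroom)

# 365W TDP * 1.25 ~= 456W, in line with the ~450W limit quoted above
# once NVIDIA's rounding is taken into account.
print(f"GTX 590 OCP ceiling: ~{ocp_limit_watts(365):.0f}W")
```

Since this limit cannot currently be raised by the end user, any overclock whose sustained draw approaches that ceiling risks silent throttling.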
Moving on, I’d like to touch upon marketing quickly. Normally the intended market and uses of most video cards are rather straightforward. For the 6990 AMD pushed raw performance and Eyefinity (particularly 5x1P), while for the GTX 590 NVIDIA is pushing raw performance, 3D Vision Surround, and PhysX (even if dedicating a GF110 to PhysX is overkill). However every now and then something comes across that catches my eye. In this case NVIDIA is also using the GTX 590 to reiterate their support for SuperSample Anti-Aliasing under DX9, DX10, and DX11. SSAA is easily the best hidden feature of the GTX 400/500 series, and it’s one that doesn’t get much marketing attention from NVIDIA, so it’s good to see it getting some attention—certainly there’s no card better suited for it than the GTX 590.
Last, but not least, we have the test. For the launch of the GTX 590 NVIDIA is providing us with ForceWare 267.71 beta, which adds support for the GTX 590; there are no other significant changes. For cooling purposes we have removed the case fan behind PEG1 on our test rig—while an 11” card is short enough to fit it, it’s counterproductive for a dual-exhaust design. Finally, in order to better compare the GTX 590 to the 6990’s OC/Uber mode, we’ve given our GTX 590 a slight overclock. Our GTX 590 OC is clocked at 750/900, a 143MHz (23%) core and 47MHz (5%) memory overclock. Meanwhile the core voltage was raised from 0.912v to 0.987v. With the poor transparency of OCP’s operation however, we are not 100% confident that we haven’t triggered OCP, so please keep that in mind when looking at the overclocked results.
Update: April 2nd, 2011: Starting with the 267.91 drivers and release 270 drivers, NVIDIA has disabled overvolting on the GTX 590 entirely. This is likely a consequence of several highly-publicized incidents where GTX 590 cards died as a result of overvolting. Although it's unusual to see a card designed to not be overclockable, clearly this is where NVIDIA intends to be.
As an editorial matter we never remove anything from a published article so our GTX 590 OC results will remain. However with these newer drivers it is simply not possible to attain them.
| CPU: | Intel Core i7-920 @ 3.33GHz |
|---|---|
| Motherboard: | Asus Rampage II Extreme |
| Chipset Drivers: | Intel 188.8.131.525 (Intel) |
| Hard Disk: | OCZ Summit (120GB) |
| Memory: | Patriot Viper DDR3-1333 3x2GB (7-7-7-20) |
| Video Cards: | AMD Radeon HD 6990, AMD Radeon HD 6970, AMD Radeon HD 6950 2GB, AMD Radeon HD 6870, AMD Radeon HD 6850, AMD Radeon HD 5970, AMD Radeon HD 5870, AMD Radeon HD 5850, AMD Radeon HD 5770, AMD Radeon HD 4870X2, AMD Radeon HD 4870, EVGA GeForce GTX 590 Classified, NVIDIA GeForce GTX 580, NVIDIA GeForce GTX 570, NVIDIA GeForce GTX 560 Ti, NVIDIA GeForce GTX 480, NVIDIA GeForce GTX 470, NVIDIA GeForce GTX 460 1GB, NVIDIA GeForce GTX 460 768MB, NVIDIA GeForce GTS 450, NVIDIA GeForce GTX 295, NVIDIA GeForce GTX 285, NVIDIA GeForce GTX 260 Core 216 |
| Video Drivers: | NVIDIA ForceWare 262.99, 266.56 Beta, 266.58, 267.71; AMD Catalyst 10.10e, 11.1a Hotfix, 11.4 Preview |
| OS: | Windows 7 Ultimate 64-bit |
Kicking things off as always is Crysis: Warhead, still one of the toughest games in our benchmark suite. Even three years after the release of the original Crysis, “but can it run Crysis?” is still an important question, and for three years the answer was “no.” Dual-GPU halo cards can now play it at Enthusiast settings at high resolutions, but for everything else max settings are still beyond the grasp of a single card.
Crysis is often a bellwether for overall performance; if that’s the case here, then NVIDIA and the GTX 590 are not off to a good start at the all-important resolution of 2560x1600.
AMD gets some really good CrossFire scaling under Crysis, and as a result the 6990 has no problem taking the lead here. At a roughly 10% disadvantage it won’t make or break the game for NVIDIA, but given the similar prices they don’t want to lose too many games.
Meanwhile amongst NVIDIA’s own stable of cards, the stock GTX 590 ends up slightly underperforming the GTX 570 SLI. As we discussed in our look at theoretical numbers, the GTX 590’s advantage/disadvantage depends on what the game in question taxes the most. Crysis is normally shader and memory bandwidth heavy, which is why the GTX 590 never falls too far behind with its memory bandwidth advantage. EVGA’s mild overclock is enough to close the gap however, delivering identical performance. A further overclock can improve performance some more, but surprisingly not by all that much.
The minimum framerates end up looking better for NVIDIA. The GTX 590 is still behind the 6990, but now only by about 5%, while the EVGA GTX 590 squeezes past by all of 0.1 frames per second.
Up next is BattleForge, Electronic Arts’ free to play online RTS. As far as RTSes go this game can be quite demanding, and this is without the game’s DX11 features.
Most of the time BattleForge is a well-balanced game, and that shows nicely here, with the GTX 590 and Radeon HD 6990 returning the exact same score of 99.6fps at 2560. Against the GTX 570 SLI however there’s nearly a 10% gap, almost perfectly matching the difference in shading performance between the two setups.
Unusually—and had we not run this multiple times we would not have believed it—our overclocked GTX 590 underperforms the EVGA GTX 590 with its mild factory overclock. This is one of the reasons we believe we’re blindly triggering OCP: even if BattleForge were largely memory bandwidth limited, our overclock still carries a nearly 5% increase in memory bandwidth, so the fact that scores are going down means something is amiss.
The next game on our list is 4A Games’ Metro 2033, their tunnel shooter released last year. In September the game finally received a major patch resolving some outstanding image quality issues with the game, finally making it suitable for use in our benchmark suite. At the same time a dedicated benchmark mode was added to the game, giving us the ability to reliably benchmark much more stressful situations than we could with FRAPS. If Crysis is a tropical GPU killer, then Metro would be its underground counterpart.
With single-GPU cards, scores in Metro are rather close, but with these dual-GPU cards scaling becomes a factor. As a result the GTX 590 falls well behind the 6990 here, facing a sizable 15% gap in performance. The overclocked GTX 590 can just close the gap, but then the 6990 OC opens it back up just as quickly. As shading performance is often the most critical factor in this benchmark, this explains why overclocking was so effective.
HAWX, Ubisoft’s 2008 aerial action game, is one of the less demanding games in our benchmark suite, particularly for the latest generation of cards. However it’s fairly unique in that it’s one of the few flying games of any kind that comes with a proper benchmark.
HAWX is a game that traditionally favors NVIDIA’s cards, so the outcome should not be a surprise. The GTX 590 takes a respectable 7% lead here, however at 179fps at 2560x1600 the difference is academic at best.
Civilization V is the latest incarnation in Firaxis Games’ series of turn-based strategy games. Civ5 gives us an interesting look at things that not even RTSes can match, with a much weaker focus on shading in the game world, and a much greater focus on creating the geometry needed to bring such a world to life. In doing so it uses a slew of DirectX 11 technologies, including tessellation for said geometry and compute shaders for on-the-fly texture decompression.
Civilization V is another title that normally works to NVIDIA’s benefit, and here we see the GTX 590’s biggest lead of the day. NVIDIA’s strong performance in this game with a single GPU directly translates into better performance for the dual-GPU GTX 590. Whether AMD will be able to find the rest of the magic that NVIDIA used to improve their Civ5 scores earlier this year remains to be seen.
Battlefield: Bad Company 2
Now approaching a year old, Bad Company 2 remains one of the cornerstone DX11 games in our benchmark suite. Based on the Frostbite 1.5 engine, it will be superseded by the more complex, DX10+ only Frostbite 2 engine (and Battlefield 3) later this year. As BC2 doesn’t have a built-in benchmark or recording mode, we take a FRAPS run of the jeep chase in the first act, which as an on-rails portion of the game provides very consistent results and a spectacle of explosions, trees, and more.
Historically Bad Company 2 favors two patterns in our tests: it favors shader speed, and it just favors AMD in general. Today is no exception, and while the GTX 590 can hit nearly 80fps, that’s still 10fps short of the 6990. Given that it’s normally shader bound our overclocked cards pick up the slack, but it’s not enough—not even the GTX 590 OC can reach the 6990, let alone an overclocked 6990.
Meanwhile our water benchmark gives us a good idea of what minimum framerates are like. Interestingly NVIDIA more than chips away at AMD’s lead here, and the GTX 590 and its overclocked variants top the charts. Given these scores it’s likely we’re approaching a non-GPU bottleneck in the game.
STALKER: Call of Pripyat
The third game in the STALKER series continues to build on GSC Game World’s X-Ray Engine by adding DX11 support, tessellation, and more. This also makes it another one of the highly demanding games in our benchmark suite.
For every game like Civilization V that makes the GTX 590 glow, there is a game like STALKER that more than wipes out any kind of trend. The GeForce GTX lineup simply gets manhandled here, making the 6990 the easy victor. We’ve seen STALKER be both shader and memory bound in the past, and it’s likely that’s what’s happening here. This is the most conclusive evidence yet that 1.5GB of RAM per GPU may come up a bit short for the GTX 590, in which case if NVIDIA ever does a 3GB GTX 590, performance here should improve. In the meantime even a hefty overclock can’t get the GTX 590 to within 10fps of the stock 6990.
Codemasters’ 2009 off-road racing game continues its reign as the token racer in our benchmark suite. As the first DX11 racer, DiRT 2 makes pretty thorough use of DX11’s tessellation abilities, not to mention still being the best looking racer we have ever seen.
Flip-flopping once again, we’re back to a game that NVIDIA normally has an advantage in, leading to the GTX 590 taking a commanding lead. While there’s still room to grow here, as evidenced by the gains from overclocking, we’re quickly approaching a CPU limit. In the meantime NVIDIA could use some wins in more shader-bound games, as at well over 100fps further performance increases stop being meaningful.
Mass Effect 2
Electronic Arts’ space-faring RPG is our Unreal Engine 3 game. While it doesn’t have a built-in benchmark, it does let us force anti-aliasing through driver control panels, giving us a better idea of UE3’s performance at higher quality settings. Since ME2 lacks any recording/benchmark facility, we use FRAPS to record a short run.
Mass Effect 2 ends up being another game the GTX 590 handily beats the 6990 at. Unfortunately for NVIDIA this is another game that already gets high framerates to start with, so the difference is largely academic unless you turn on something else (e.g. SSAA) to take advantage of the performance afforded by such a high framerate.
Wolfenstein

Finally among our benchmark suite we have Wolfenstein, the most recent game to be released using id Software’s id Tech 4 engine. All things considered it’s not a very graphically intensive game, but at this point it’s the most recent OpenGL title available. It’s more than likely the entire OpenGL landscape will be thrown upside-down once id releases Rage later this year.
Wolfenstein ends up being another title where the GTX 590 and 6990 are quite close, largely because the game quickly becomes CPU limited after a point. Still, the GTX 590 OC picks up almost 10%, and even the EVGA GTX 590 is good for around 2%.
Compute Performance

Moving on from our look at gaming performance, we have our customary look at compute performance.
Our first compute benchmark comes from Civilization V, which uses DirectCompute to decompress textures on the fly. Civ5 includes a sub-benchmark that exclusively tests the speed of their texture decompression algorithm by repeatedly decompressing the textures required for one of the game’s leader scenes.
In the game world Civ5 benefits significantly from SLI and CrossFire. For our texture decompression test however AFR is more a liability than a benefit. This doesn’t impact the game in any meaningful manner, but it’s an example of how SLI/CF aren’t always the right tool for the job. Unfortunately for both parties, with as few compute applications as there are today, almost none of them benefit from SLI/CF.
Our second GPU compute benchmark is SmallLuxGPU, the GPU ray tracing branch of the open source LuxRender renderer. While it’s still in beta, SmallLuxGPU recently hit a milestone by implementing a complete ray tracing engine in OpenCL, allowing them to fully offload the process to the GPU. It’s this ray tracing engine we’re testing.
SmallLuxGPU currently only supports ray tracing on a single GPU, so all of our results are effectively proxies for how a single-GPU GTX 590 would perform. Not surprisingly overclocks do wonders here, and NVIDIA’s strong compute architecture gives them an easy win. SLI/CF performance will become more important here when we upgrade to LuxMark for our next iteration of our benchmark suite, as LuxMark can handle multiple OpenCL devices.
Our final compute benchmark is a Folding @ Home benchmark. Given NVIDIA’s focus on compute for Fermi, cards such as the GTX 590 can be particularly interesting for distributed computing enthusiasts, as two GPUs should be able to quickly retire work units.
Folding@Home doesn’t directly benefit from CF/SLI at all. However by dispatching one WU to each GPU it’s possible to double effective performance. With that taken into account the GTX 590 is quite an effective cruncher, particularly when we start looking at overclocking.
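Since F@H itself doesn’t scale across SLI, the doubling comes purely from running independent clients in parallel, one pinned to each GPU. A minimal sketch of that idea (the `fold_work_unit` function and the PPD figure are hypothetical placeholders, not the actual Folding@Home client API):

```python
from concurrent.futures import ThreadPoolExecutor

def fold_work_unit(gpu_id, points):
    # Hypothetical stand-in for one F@H client bound to a single GPU;
    # each GPU crunches its own work unit independently of the other.
    return points

per_gpu_ppd = 10000  # hypothetical per-GPU points-per-day throughput

# One client per GPU: aggregate throughput is simply additive.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(fold_work_unit, [0, 1], [per_gpu_ppd] * 2))

total_ppd = sum(results)  # two GPUs ≈ double the effective crunching rate
```

The key point is that there is no inter-GPU communication at all; each work unit is a self-contained job, which is why a dual-GPU card scales nearly perfectly here even though SLI itself contributes nothing.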
Power, Temperature, & Noise
Last but not least as always is our look at the power consumption, temperatures, and acoustics of the GTX 590. With a 365W TDP, expect everything here to be quite extreme.
|GeForce GTX 500 Series Voltage|
|GTX 570 Load||GTX 590 Load||GTX 590 OC||GTX 590 Idle|
In order to fit two GF110 GPUs onto a single card at a reasonable TDP, NVIDIA clearly had to do a lot of binning in order to get chips that would run at a low enough voltage. Our card runs at 0.912v for each GPU, which is only 0.012v more than the idle voltage of the rest of the GF110 cards. We really can’t emphasize enough just how low of a load voltage this is; it’s absolutely minuscule for GF110. This is also reflected in the idle voltage, which at 0.875v is 0.025v lower than the normal GF110 idle voltage, meaning GTX 590 should also idle better than the average GTX 580/570. NVIDIA’s best and lowest-leaking chips are clearly necessary in order to build the GTX 590.
Before we get too far, we wanted to mention OCP again. With NVIDIA’s OCP changes they have once again locked out our ability to disable OCP, and this time quite possibly for good as the generic OCP mechanism even catches our normal fallback programs. As our existing body of work for NVIDIA cards has OCP disabled we can’t reach parity with our existing Furmark results, thanks in large part to NVIDIA throttling Furmark far more than necessary. We’re going to go ahead and publish our results, but it’s not the same playing field.
As a result we’ve thrown in another game instead: HAWX. It’s not a very graphically complex game, but it’s actually one of the most power intensive games in our test suite, making it the best candidate for a Furmark replacement on such short notice.
At idle things don’t look too bad for the GTX 590. With NVIDIA’s voltage binning and efficiency gains from a single card, our total power consumption is 10W lower than the GTX 570 in SLI and 12W lower than the GTX 295. However even binned chips can’t completely erase GF110’s generally mediocre idle power consumption or lack of a super low power mode for the slave GPU, two areas where AMD has an advantage. As a result even with binning GTX 590 still draws 13W more than the 6990 at idle.
Power consumption under Crysis generally mirrors our expectations. NVIDIA’s power consumption should be similar to or higher than the 6990, and this is what we see. At 506W for the GTX 590 it’s actually only 10W more than the GTX 560 in SLI, even though performance is notably greater. Or alternatively it’s 50W under the GTX 570 in SLI. However it falls behind the 6990 by 15W here, which is compounded by the fact that the 6990 gets better performance in this game.
Meanwhile our OC results are quite a bit higher. Even though we’re still using a core voltage below any GTX 580 we have, at 0.987v, our GTX 590 reaches GTX 580 SLI power consumption numbers. Thus the good news is that the card can handle such power; the bad news is that it’s not possible to match the GTX 580 SLI’s performance even at this level of power consumption.
Our first instance of HAWX has the GTX 590 once again falling behind the 6990 by about 10W. EVGA’s factory overclock adds another 11W, and our own overclock brings that up to 588W. Unlike Crysis this is still well below the GTX 580 SLI, this time only missing the 6990 OC by a few watts. Also worthy of note is that our HAWX overclock power draw is 28W lower than our Crysis overclock power draw, in contrast to both the stock and EVGA clocks drawing 30-35W more with HAWX. Again, this indicates the OCP has come into play, this time in a regular game.
This is probably the best graph for illustrating just how hard OCP throttles Furmark. Whereas AMD’s PowerTune does a very good job of keeping power consumption near the true power limit on the 6990 (in this case 375W), OCP is far more aggressive. This is why the GTX 590 consumes nearly 100W less, and why Furmark’s status as a worst-case scenario test is compromised with overly aggressive OCP. Even the GTX 590 OC with its voltage bump is throttled to the point where it consumes less power than the 6990.
Dual-GPU cards generally do poorly at idle temperatures, though a center-mounted fan improves the situation somewhat, which is the biggest reason that temperatures are down from the GTX 295. However such a fan configuration doesn’t cure all ills. As a result at 45C for idle we’re a bit on the warm side, but it’s nothing that’s a problem.
Not surprisingly, the GTX 590 is near the top of our Crysis temperature chart. Although we don’t publish the individual GPU temperatures, the hotter GPU in all cases on the GTX 590 was the GPU which exhausts externally, in this case incurring the penalty of having half that vent blocked by a DVI port. As a result the GTX 590 is always going to run a bit hotter than the 6990. We’re also seeing why 0.987v is about as high as you want to go on the GTX 590 OC—it’s within 5C of the thermal throttle.
HAWX largely mirrors Crysis here. The GTX 590 ends up being warmer than the 6990, and even the 6990 OC. The 590 OC is also 2C cooler here, thanks to OCP. 90C isn’t any worse than the GTX 580 in SLI, but then that’s about as warm as we want to allow things to get.
Again with Furmark being throttled, the GTX 590 looks unnaturally good here. Temperatures are below what we see in games.
We haven’t quite decided why the GTX 590 breaks 46dB here. It’s probably the use of a fan as opposed to a blower, but it could just as well be the fact that the GTX 590 effectively exhausts in an uneven fashion due to the half-blocked vent. In any case 46.8dB is by no means loud, but this isn’t a whisper-silent card at idle.
These are the noise results collected during our Crysis temperature runs. Remember how we said NVIDIA was using the fact that they launched after AMD in order to claim that they had a quieter cooler? This is the proof. The GTX 590 simply embarrasses the 6990 here; it’s not even a contest. Make no mistake: 57.9dB is not a quiet card; we’re still looking at a dual-GPU monster, but it’s not the roaring monster that the 6990 is. On a subjective level I’d say things are even better than the numbers show—the GTX 590 is lower pitched than the 6990, which improves the perceived noise. Note that if we start overclocking + overvolting however, we largely erase the difference.
HAWX doesn’t make the GTX 590 look quite as good, but the difference is still there. The GTX 590 manages to stay just south of 60dB versus 65dB for the 6990. Perhaps the more impressive outcome however is that the GTX 590 is quieter than the GTX 580 in SLI, with the latter having the advantage of being two separate cards that can be independently cooled. We didn’t have time to grab the GTX 570 SLI or the 6870 in CrossFire, however I suspect the GTX 590 is louder than either of those. It’s also going to be louder than any single card setup (except perhaps the GTX 480)—even NVIDIA will tell you that the GTX 590 is louder than the GTX 580.
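To put these gaps in perspective, it’s worth remembering that decibels are logarithmic, so a few dB is a much bigger physical difference than it sounds. A quick sketch of converting the measured differences into sound-power ratios, using the standard 10·log10 relation (the HAWX figures are the ones measured above):

```python
def sound_power_ratio(quiet_db, loud_db):
    """Decibels are logarithmic: every 10 dB is a 10x change in sound power,
    so the ratio between two readings is 10 ** (delta_dB / 10)."""
    return 10 ** ((loud_db - quiet_db) / 10)

# HAWX load noise from the measurements above (dBA)
gtx590_hawx = 60.0  # "just south of 60dB"
hd6990_hawx = 65.0

ratio = sound_power_ratio(gtx590_hawx, hd6990_hawx)
print(f"{ratio:.2f}x")  # a 5 dB gap is ~3.16x the sound power
```

Note that perceived loudness doesn’t track sound power one-to-one; a common rule of thumb is that a 10 dB increase sounds roughly twice as loud, which still makes a 5-7 dB gap clearly audible.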
Finally we have our Furmark noise values. With extreme throttling everything is different for GTX 590, giving the results little in the way of usefulness.
Overall our power, temperature, and noise data proved to be quite interesting. On the one hand the GTX 590’s power consumption is a bit higher and temperatures a bit hotter than the comparable 6990. However the noise results are nothing short of remarkable—if NVIDIA can dissipate 350W+ of heat while at the same time making 5-7dB less noise, then it starts to become clear that AMD’s design has a serious weakness. The ultimate question is what did NVIDIA do right that AMD did not?
Final Thoughts

If my final thoughts start sounding like a broken record, it’s because once again a set of NVIDIA & AMD product launches have resulted in a pair of similarly performing products.
The crux of the matter is that NVIDIA and AMD have significantly different architectures, and once again this has resulted in cards that are quite equal on average but are all over the place in individual games and applications. If we just look at the mean performance lead/loss for all games at 2560, the GTX 590 is within 1% of the 6990; however, within those games there’s a great deal of variance. The GTX 590 does extremely well in Civilization V as we’d expect, along with DiRT 2, Mass Effect 2, and HAWX. Meanwhile in Crysis, BattleForge, and especially STALKER the GTX 590 comes up very short. Thus choosing the most appropriate card is heavily reliant on what games are going to be played on it, and as a result there is no one card that can be crowned king.
Of the games NVIDIA does well in, only Civ5 is a game we’d classify as highly demanding; the rest are games where the GTX 590 is winning, but it’s also getting 100+ frames per second. Meanwhile in the games AMD does well at, the average framerate is much lower, and all of them are what we’d consider demanding. Past performance does not perfectly predict future performance, but there’s a good chance the 6990 is going to have a similar lead on future, similarly intensive games (at least as long as extreme tessellation isn’t a factor). So if you had to choose a card based on planning for future use as opposed to current games, the 6990 is probably the better choice from a performance perspective. Otherwise if you’re choosing based off of games you’d play today, you need to look at the individual games.
With that said, the wildcard right now is noise. Dual-GPU cards are loud, but the GTX 590 ends up being the quieter of the two by quite a bit; the poor showing of the 6990 ends up making the GTX 590 look a lot more reasonable than it necessarily is. The situation is a lot like the launch of the GTX 480, where we saw the GTX 480 take the performance crown, but at the cost of noise. The 6990’s performance advantage in shader-intensive games goes hand-in-hand with a much louder fan; whether this is a suitable tradeoff is going to be up to you to decide.
Ultimately we’re still looking at niche products here, so we shouldn’t lose sight of that fact. A pair of single-GPU cards in SLI/CF is still going to be faster and a bit quieter if not a bit more power hungry, all for the same price or less. The GTX 590 corrects the 6990’s biggest disadvantage versus a pair of single-GPU cards, but it ends up being no faster on average than a pair of $280 6950s, and slower than a pair of $350 GTX 570s. At the end of the day the only thing really threatened here is the GTX 580 SLI; while it’s bar none the fastest dual-GPU setup there is, at $1000 for a pair of the cards a quad-GPU setup is only another $400. For everything else, as was the case with the Radeon HD 6990, it’s a matter of deciding whether you want two video cards on one PCB or two PCBs.
Quickly, let's also touch upon factory overclocked/premium cards, since we had the chance to look at one today with the EVGA GeForce GTX 590 Classified. EVGA’s factory overclock isn’t anything special, and indeed if it were much less it wouldn’t even be worth the time to benchmark. Still, EVGA is charging 4% more for about as much of a performance increase, and then is coupling that with a lifetime warranty; ignore the pack-in items and you have your usual EVGA value-added fare, and all told it’s a reasonable deal, particularly when most other GTX 590s don’t come with that kind of warranty. Meanwhile EVGA’s overclocking utility suite is nice to see as always, though with the changes to OCP (and the inability to see when it kicks in) I’m not convinced GTX 590 is a great choice for end-user overclocking right now.
Update: April 2nd, 2011: Starting with the 267.91 drivers and release 270 drivers, NVIDIA has disabled overvolting on the GTX 590 entirely. This is likely a consequence of several highly-publicized incidents where GTX 590 cards died as a result of overvolting. Although it’s unusual to see a card deliberately locked out of overvolting, clearly this is where NVIDIA intends to be.
Finally, there’s still the multi-monitor situation to look at. We’ve only touched on a single monitor at 2560; with Eyefinity and NVIDIA/3D Vision Surround things can certainly change, particularly with the 6990’s extra 512MB of RAM per GPU to better handle higher resolutions. But that is a story for another day, so for that you will have to stay tuned…