It really doesn’t seem like it’s been all that long, but it’s been nearly a year and a half since NVIDIA last had a dual-GPU card on the market. The GeForce GTX 295, launched in January of 2009, was the first card based on the 55nm die shrink of the GT200 GPU. For most of that year the GTX 295 enjoyed bragging rights as the world’s fastest video card; however, the launch of the Radeon HD 5000 series late in 2009 effectively put an end to the GTX 295’s run as a competitor.

Even with the launch of the GTX 400 series in March of 2010, a new dual-GPU card from NVIDIA remained the stuff of rumors—several of them claimed we’d see a card based on GF10X, but nothing ever materialized. Without a dual-GPU card, NVIDIA had to settle for having the fastest single-GPU card on the market in the GTX 480, a market position worth bragging about, but one that was always shadowed by AMD’s dual-GPU Radeon HD 5970. Why we never saw a dual-GPU GTX 400 series card we’ll never know—historically NVIDIA has not released a dual-GPU card for every generation—but it’s a reasonable assumption that GF100’s high leakage made such a part unviable.

But at long last the time has come for a new NVIDIA dual-GPU card. GF100’s refined follow-up, GF110, put the kibosh on leakage and allowed NVIDIA to crank up clocks and reduce power consumption throughout their GTX 500 lineup. This also seems to have been the key to making a dual-GPU card possible, as NVIDIA has finally unveiled their new flagship card: the GeForce GTX 590. Launching a mere two weeks after AMD’s latest flagship card, the Radeon HD 6990, NVIDIA is gunning to take back their spot at the top. But will they reach their goal? Let’s find out.

| | GTX 590 | GTX 580 | GTX 570 | GTX 560 Ti |
| --- | --- | --- | --- | --- |
| Stream Processors | 2 x 512 | 512 | 480 | 384 |
| Texture Address / Filtering | 2 x 64/64 | 64/64 | 60/60 | 64/64 |
| ROPs | 2 x 48 | 48 | 40 | 32 |
| Core Clock | 607MHz | 772MHz | 732MHz | 822MHz |
| Shader Clock | 1214MHz | 1544MHz | 1464MHz | 1644MHz |
| Memory Clock | 853.5MHz (3414MHz data rate) GDDR5 | 1002MHz (4008MHz data rate) GDDR5 | 950MHz (3800MHz data rate) GDDR5 | 1002MHz (4008MHz data rate) GDDR5 |
| Memory Bus Width | 2 x 384-bit | 384-bit | 320-bit | 256-bit |
| VRAM | 2 x 1.5GB | 1.5GB | 1.25GB | 1GB |
| FP64 | 1/8 FP32 | 1/8 FP32 | 1/8 FP32 | 1/12 FP32 |
| Transistor Count | 2 x 3B | 3B | 3B | 1.95B |
| Manufacturing Process | TSMC 40nm | TSMC 40nm | TSMC 40nm | TSMC 40nm |
| Price Point | $699 | $499 | $349 | $249 |
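For those who want to sanity-check the table, the GDDR5 data rates are simply four times the memory clock (GDDR5 is quad-pumped), and per-GPU bandwidth follows from the bus width. A minimal Python sketch, using only the figures from the table above:

```python
# Memory bandwidth sketch for the cards in the table above. GDDR5 is
# quad-pumped, so the data rate is 4x the memory clock; bandwidth is
# then the bus width (in bytes) times the data rate.

cards = {
    # name: (memory clock in MHz, bus width in bits, GPU count)
    "GTX 590":    (853.5, 384, 2),
    "GTX 580":    (1002,  384, 1),
    "GTX 570":    (950,   320, 1),
    "GTX 560 Ti": (1002,  256, 1),
}

for name, (mclk, bus_bits, gpus) in cards.items():
    data_rate = 4 * mclk                       # effective MHz (MT/s)
    gbps = (bus_bits / 8) * data_rate / 1000   # GB/s per GPU
    print(f"{name}: {data_rate:.0f}MHz data rate, {gpus} x {gbps:.1f} GB/s")
```

This works out to roughly 164GB/s per GPU for the GTX 590, versus 192GB/s for the GTX 580 and 152GB/s for the GTX 570.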

Given that this launch takes place only two weeks after the Radeon HD 6990, it’s only natural to make comparisons to AMD’s recently launched dual-GPU card. In fact as we’ll see the cards are similar in a number of ways, which is a bit surprising given that the last time both companies had competing dual-GPU cards, the GTX 295 and Radeon HD 4870X2 were quite different in design.

But before we get too far, let’s start at the top with the specs. As is now customary for dual-GPU cards, NVIDIA has put together two of their top-tier GPUs and turned down the clocks in order to meet a power/heat budget. In single-GPU configurations we’ve seen GF110 hit 772MHz on the GTX 580, but that’s a card that can hit 300W at load under the right/wrong circumstances. For the GTX 590 the core clock is down to 607MHz, while the functional unit count remains unchanged with everything enabled. Meanwhile the memory clock has also been reduced, to the lowest we’ve seen since the GTX 470: 853.5MHz (3414MHz data rate). NVIDIA has never hit very high memory clocks on the GTX 500 series, and it stands to reason that routing two 384-bit busses only makes the job harder.

All told, at these clocks comparisons to the GTX 570 are more apt than comparisons to the GTX 580. Even compared to the GTX 570, each GPU on the GTX 590 has only 83% of the rasterization throughput, 88% of the shading/texturing capacity, and 99.5% of the ROP capacity. Where the GTX 590 has the edge on a per-GPU basis is that, with all of GF110’s functional units enabled and the full 384-bit memory bus, it has 108% of the memory bandwidth and 120% of the L2 cache. As a result, while performance should be close to the GTX 570 on a per-GPU basis, it will fluctuate depending on the biggest bottleneck, with shading/texturing among the worst scenarios and L2 cache/memory bandwidth among the best. Consequently, total performance should be close to that of the GTX 570 SLI.
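Here is a quick worked example of where those per-GPU percentages come from, using the unit counts and clocks from the spec table. One assumption worth flagging: the 120% L2 figure reflects GF110’s full 768KB of L2 versus the GTX 570’s 640KB, since L2 is tied to the number of enabled memory controllers.

```python
# Per-GPU GTX 590 throughput relative to the GTX 570.
# Rasterization scales with core clock, shading/texturing with
# shader clock x SP count, ROP throughput with core clock x ROPs,
# and memory bandwidth with bus width x data rate.
# L2 cache: 768KB (6 memory controllers) vs 640KB (5) = 120%.

gtx590 = {"core": 607, "shader": 1214, "sps": 512, "rops": 48,
          "bus": 384, "data_rate": 3414}
gtx570 = {"core": 732, "shader": 1464, "sps": 480, "rops": 40,
          "bus": 320, "data_rate": 3800}

ratios = {
    "rasterization": gtx590["core"] / gtx570["core"],
    "shading/texturing": (gtx590["shader"] * gtx590["sps"]) /
                         (gtx570["shader"] * gtx570["sps"]),
    "ROP throughput": (gtx590["core"] * gtx590["rops"]) /
                      (gtx570["core"] * gtx570["rops"]),
    "memory bandwidth": (gtx590["bus"] * gtx590["data_rate"]) /
                        (gtx570["bus"] * gtx570["data_rate"]),
}

for metric, r in ratios.items():
    print(f"{metric}: {r:.1%}")   # ~82.9%, ~88.4%, ~99.5%, ~107.8%
```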

As was the case with the 6990, NVIDIA is raising the limit on power consumption. The GTX 590 is rated for a TDP of 365W, keeping in mind that NVIDIA’s definition of TDP is the maximum power draw in “real world applications”. The closest metric from AMD would be their “typical gaming power”, for which the 6990 was rated for 350W. As a result the 6990 and GTX 590 should be fairly close in power consumption most of the time. Normally only Furmark and similar programs would generate a significant difference, but as we’ll see the rules have changed starting with NVIDIA’s latest drivers. Meanwhile for the idle TDP NVIDIA does not specify a value, but it should be under 40W.

With performance on paper that should rival the GTX 570 SLI—and by extension the Radeon HD 6990—it shouldn’t come as a big surprise that NVIDIA is pricing the GTX 590 to be competitive with AMD’s card. The MSRP of the GTX 590 will be $699, the same price the 6990 launched at two weeks ago. The card we’re looking at today, the EVGA GeForce GTX 590 Classified, is a premium package that will run a bit higher at $729. EVGA won’t be the only vendor offering a premium GTX 590, and while we don’t have a specific breakdown by vendor, expect a range of prices. Ultimately, cards at the $699 MSRP will be competing with the 6990, the 6970 CF, and the GTX 570 SLI.

As for availability, it’s a $700 card; NVIDIA isn’t expecting any real problems, but these are low-volume cards, so it’s quite likely they’ll go in and out of stock.

March 2011 Video Card MSRPs

| NVIDIA | Price | AMD |
| --- | --- | --- |
| GeForce GTX 590 | $700 | Radeon HD 6990 |
| | $480 | |
| | $320 | Radeon HD 6970 |
| | $240 | Radeon HD 6950 1GB |
| | $190 | Radeon HD 6870 |
| | $160 | Radeon HD 6850 |
| | $150 | |
| | $130 | |
| | $110 | Radeon HD 5770 |

Meet The EVGA GeForce GTX 590 Classified
Comments

  • valenti - Thursday, March 24, 2011 - link

    Ryan, I commented last week on the 550 review. Just to echo that comment here: how are you getting the "nodes per day" numbers? Have you considered switching to a points per day metric? Very few people can explain what nodes per day are, and they aren't a very good measure for real world folding performance.

    (also, it seems like you should double the number for this review, since I'm guessing it was just ignoring the second GPU)
  • Ryan Smith - Thursday, March 24, 2011 - link

    Last year NVIDIA worked with the F@H group to provide a special version of the client for benchmark purposes. Nodes per day is how the client reports its results. Since points are arbitrary based on how the F@H group is scoring things, I can't really make a conversion.
  • poohbear - Thursday, March 24, 2011 - link

    Good to see that a $700 card finally has a decent cooler! Why would somebody spend $700 and then have to go spend another $40 on an aftermarket cooler??? NVIDIA & AMD really need to just charge $750 and have an ultra quiet card; the people in this price range aren't gonna squabble over an extra $50 for Pete's sake!!!! It makes no sense that they skimp on the cooler at this price range! This is the top of the line where money isn't the issue!
  • Guspaz - Thursday, March 24, 2011 - link

    Let's get this straight, nVidia. Slapping two of your existing GPUs together does not make this a "next-generation card". Saying that you've been working on it for two years is also misleading; I doubt it took two years just to lay out the PCB to get two GPUs on a single board.

    SLI and Crossfire still feel like kludges. Take Crysis 2 for example. The game comes out, and I try to play it on my 295. It runs, but only on one GPU. So I go looking online; it turns out that there's an SLI profile update for the game, but only for the latest beta drivers. If you install those drivers *and* the profile update, you'll get the speed boost, but also various graphical corruption issues involving flickering of certain types of effects (that seem universal rather than isolated).

    After two goes at SLI (first dual 285s, next a 295), I've come to the conclusion that SLI is just not worth the headache. You'll end up dealing with constant compatibility issues.
  • strikeback03 - Thursday, March 24, 2011 - link

    And that is why people still buy the 6970/580, rather than having 2 cheaper cards in SLI like so many recommend.
  • JarredWalton - Thursday, March 24, 2011 - link

    For the record, I've had three goes at CrossFire (2 x 3870, 4870X2, and now 2 x 5850). I'm equally disappointed with day-of-release gaming results. But, if you stick to titles that are 2-3 months old, it's a lot better. (Yeah, spend $600 on GPUs just so you can wait two months after a game release before buying....)
  • Guspaz - Friday, March 25, 2011 - link

    I don't know about that, the original Crysis still has a lot of issues with SLI.
  • Nentor - Thursday, March 24, 2011 - link

    "For the GTX 590 launch, NVIDIA once again sampled partner cards rather than sampling reference cards directly to the press. Even with this, all of the cards launching today are more-or-less reference with a few cosmetic changes, so everything we’re describing here applies to all other GTX 590 cards unless otherwise noted.

    With that out of the way, the card we were sampled is the EVGA GeForce GTX 590 Classified, a premium GTX 590 offering from EVGA. The important difference from the reference GTX 590 is that GTX 590 Classified ships at slightly higher clocks—630/864 vs. 607/853.5—and comes with a premium package, which we will get into later. The GTX 590 Classified also commands a premium price of $729."

    Are we calling overclocked cards "more-or-less reference" cards now? That's a nice way to put it, I'll use it the next time I get stopped by a police officer. Sir, I was going more or less 100mph.

    Reference is ONE THING. It is the basis and does not waver. Anything that is not it is either overclocked or underclocked.
  • strikeback03 - Thursday, March 24, 2011 - link

    Bad example, as in the US at least your speedometer is only required to be accurate within 10%, meaning you can't get ticketed at less than 10% over the speed limit. This card is only overclocked by 4%. More importantly, they a) weren't sent a reference card, and b) included full tests at stock clocks. Would you rather they not review it since it isn't a reference card?
  • Nentor - Thursday, March 24, 2011 - link

    That is a good point actually, I didn't think of that.

    Maybe reject the card yes, but that is not going to happen. Nvidia is just showing who is boss by sending a non reference card. AT will have to swallow whatever Nvidia feeds them if they want to keep bringing the news.
