It really doesn’t seem like it’s been all that long, but it’s been nearly a year and a half since NVIDIA last launched a dual-GPU card. The GeForce GTX 295 debuted in January of 2009 as the first card based on the 55nm die shrink of the GT200 GPU. For most of the year the GTX 295 enjoyed bragging rights as the world’s fastest video card; however, the launch of the Radeon HD 5000 series late in 2009 effectively put an end to the GTX 295’s run as a competitor.

Even with the launch of the GTX 400 series in March of 2010, a new dual-GPU card from NVIDIA remained the stuff of rumors—a number of rumors claimed we’d see a card based on GF10X, but nothing ever materialized. Without a dual-GPU card, NVIDIA had to settle for having the fastest single-GPU card on the market through the GTX 480, a market position worth bragging about, but one that was always shadowed by AMD’s dual-GPU Radeon HD 5970. Why we never saw a dual-GPU GTX 400 series card we’ll never know—historically NVIDIA has not released a dual-GPU card for every generation—but it’s a reasonable assumption that GF100’s high leakage made such a part unviable.

But at long last the time has come for a new NVIDIA dual-GPU card. GF100’s refined follow-up, GF110, put the kibosh on leakage and allowed NVIDIA to crank up clocks and reduce power consumption throughout their GTX 500 lineup. This also seems to have been the key to making a dual-GPU card possible, as NVIDIA has finally unveiled their new flagship card: the GeForce GTX 590. Launching a mere two weeks after AMD’s latest flagship card, the Radeon HD 6990, NVIDIA is gunning to take back their spot at the top. But will they reach their goal? Let’s find out.

 | GTX 590 | GTX 580 | GTX 570 | GTX 560 Ti
Stream Processors | 2 x 512 | 512 | 480 | 384
Texture Address / Filtering | 2 x 64/64 | 64/64 | 60/60 | 64/64
ROPs | 2 x 48 | 48 | 40 | 32
Core Clock | 607MHz | 772MHz | 732MHz | 822MHz
Shader Clock | 1214MHz | 1544MHz | 1464MHz | 1644MHz
Memory Clock | 853.5MHz (3414MHz data rate) GDDR5 | 1002MHz (4008MHz data rate) GDDR5 | 950MHz (3800MHz data rate) GDDR5 | 1002MHz (4008MHz data rate) GDDR5
Memory Bus Width | 2 x 384-bit | 384-bit | 320-bit | 256-bit
VRAM | 2 x 1.5GB | 1.5GB | 1.25GB | 1GB
FP64 | 1/8 FP32 | 1/8 FP32 | 1/8 FP32 | 1/12 FP32
Transistor Count | 2 x 3B | 3B | 3B | 1.95B
Manufacturing Process | TSMC 40nm | TSMC 40nm | TSMC 40nm | TSMC 40nm
Price Point | $699 | $499 | $349 | $249
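As a side note, the “data rate” figures in the table follow directly from GDDR5’s quad-pumped signaling, and peak memory bandwidth falls out of the data rate and bus width. Here is a quick back-of-the-envelope sketch (the helper function is our own, not anything from NVIDIA):

```python
# GDDR5 transfers 4 bits per pin per command-clock cycle, so the
# effective data rate is 4x the memory clock. Peak bandwidth is then
# data rate x bus width, converted from bits to bytes.

def gddr5_bandwidth_gbps(mem_clock_mhz: float, bus_width_bits: int) -> float:
    """Peak theoretical memory bandwidth in GB/s."""
    data_rate_mhz = mem_clock_mhz * 4                 # quad-pumped
    return data_rate_mhz * bus_width_bits / 8 / 1000  # Mbit/s -> GB/s

# GTX 590, per GPU: 853.5MHz on a 384-bit bus -> ~163.9GB/s
print(round(gddr5_bandwidth_gbps(853.5, 384), 1))
# GTX 580: 1002MHz on a 384-bit bus -> ~192.4GB/s
print(round(gddr5_bandwidth_gbps(1002, 384), 1))
```

The roughly 15% per-GPU bandwidth deficit versus the GTX 580 comes entirely from the lower memory clock, since the bus width is untouched.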

Given that this launch takes place only two weeks after the Radeon HD 6990, it’s only natural to make comparisons to AMD’s recently launched dual-GPU card. In fact as we’ll see the cards are similar in a number of ways, which is a bit surprising given that the last time both companies had competing dual-GPU cards, the GTX 295 and Radeon HD 4870X2 were quite different in design.

But before we get too far, let’s start at the top with the specs. As is now customary for dual-GPU cards, NVIDIA has put together two of their top-tier GPUs and turned down the clocks in order to fit within a power/heat budget. In single card configurations we’ve seen GF110 hit 772MHz for the GTX 580, but that was for a card that can reach a 300W load under the right/wrong circumstances. For the GTX 590 the clocks are down to 607MHz, while the functional unit count remains unchanged with everything enabled. Meanwhile memory clocks have also been reduced to the lowest we’ve seen since the GTX 470: 853.5MHz (3414MHz data rate). NVIDIA has never hit very high memory clocks on the GTX 500 series, so it stands to reason that routing two 384-bit busses only makes the job harder.

All told, at these clocks comparisons to the GTX 570 are more apt than comparisons to the GTX 580. Even compared to the GTX 570, on a per-GPU basis the GTX 590 only has 83% of the rasterization throughput, 88% of the shading/texturing capacity, and 99.5% of the ROP capacity. Where the GTX 590 has the edge on the GTX 570 on a per-GPU basis is that with all of GF110’s functional units enabled and a 384-bit memory bus, it has 108% of the memory bandwidth and 120% of the L2 cache. As a result, while performance should be close to the GTX 570 on a per-GPU basis, it will fluctuate depending on the biggest bottleneck, with shading/texturing being among the worst scenarios and L2 cache/memory bandwidth being among the best. Consequently, total performance should be close to that of the GTX 570 SLI.
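For those who want to check our math, the per-GPU percentages above fall out of simple units × clock ratios. A quick sketch, using our own helper function and the numbers from the spec table:

```python
# Our own back-of-the-envelope check of the per-GPU ratios quoted above.
# Throughput scales roughly with functional units x clock; the full GF110
# on the GTX 590 has 16 SMs/48 ROPs, while the GTX 570's cut-down GF110
# has 15 SMs/40 ROPs.

def ratio(units_a: int, clock_a: float, units_b: int, clock_b: float) -> float:
    """Throughput of configuration A relative to configuration B."""
    return (units_a * clock_a) / (units_b * clock_b)

core_590, core_570 = 607, 732  # core clocks, MHz

# Rasterization tracks the core clock alone (identical front-ends)
print(round(ratio(1, core_590, 1, core_570), 3))    # 0.829 -> "83%"
# Shading/texturing: 512 vs. 480 CUDA cores at their shader clocks
print(round(ratio(512, 1214, 480, 1464), 3))        # 0.885 -> "88%"
# ROP throughput: 48 vs. 40 ROPs at the core clock
print(round(ratio(48, core_590, 40, core_570), 3))  # 0.995 -> "99.5%"
# Memory bandwidth: 384-bit @ 3414MHz vs. 320-bit @ 3800MHz data rate
print(round(ratio(384, 3414, 320, 3800), 3))        # 1.078 -> "108%"
```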

As was the case with the 6990, NVIDIA is raising the limit on power consumption. The GTX 590 is rated for a TDP of 365W, keeping in mind that NVIDIA’s definition of TDP is the maximum power draw in “real world applications”. The closest metric from AMD would be their “typical gaming power”, for which the 6990 was rated at 350W. As a result the 6990 and GTX 590 should be fairly close in power consumption most of the time. Normally only FurMark and similar programs would generate a significant difference, but as we’ll see the rules have changed starting with NVIDIA’s latest drivers. Meanwhile NVIDIA does not specify an idle TDP, but it should be under 40W.

With performance on paper that should rival the GTX 570 SLI—and by extension the Radeon HD 6990—it shouldn’t come as a big surprise that NVIDIA is pricing the GTX 590 to be competitive with AMD’s card. The MSRP of the GTX 590 will be $699, the same price the 6990 launched at two weeks ago. The card we’re looking at today, the EVGA GeForce GTX 590 Classified, is a premium package that will run a bit higher at $729. EVGA won’t be the only vendor offering a premium GTX 590 package, and while we don’t have a specific breakdown by vendor, expect a range of prices. Ultimately, cards at the $699 MSRP will be competing with the 6990, the 6970CF, and the GTX 570 SLI.

As for availability, it’s a $700 card. NVIDIA isn’t expecting any real problems, but these are low-volume cards, so it’s quite likely they’ll go in and out of stock.

March 2011 Video Card MSRPs
NVIDIA | Price | AMD
GeForce GTX 590 | $700 | Radeon HD 6990
 | $480 |
 | $320 | Radeon HD 6970
 | $240 | Radeon HD 6950 1GB
 | $190 | Radeon HD 6870
 | $160 | Radeon HD 6850
 | $150 |
 | $130 |
 | $110 | Radeon HD 5770

Meet The EVGA GeForce GTX 590 Classified
123 Comments

  • Ryan Smith - Thursday, March 24, 2011 - link

    One way or another we will be including multi-monitor stuff. The problem right now is getting ahold of a set of matching monitors, which will take some time to resolve.
  • fausto412 - Thursday, March 24, 2011 - link

    also would be nice to test 1680x1050 on at least a couple of demanding games. illustrate to people who have 22" screens that these cards are a waste of money at their resolution.
  • bigboxes - Thursday, March 24, 2011 - link

    It has been a waste for that low resolution since two generations ago. But you knew that. Troll...
  • tynopik - Thursday, March 24, 2011 - link

    matching monitors might matter for image quality or something, but for straight benchmarking, who cares?

    surely you have 3 monitors capable of 1920x1080

    it's not like the card cares if one is 20" and another is 24"
  • 7Enigma - Thursday, March 24, 2011 - link

    I don't understand this either. There is no need for anything fancy, heck you don't even need to have them actually outputting anything, just fool the drivers into THINKING they are driving multiple monitors!
  • DanNeely - Thursday, March 24, 2011 - link

    I don't entirely agree. While it doesn't matter much for simple average FPS benches like Anandtech is currently doing, they fall well short of the maximum playable settings testing done by sites like HardOCP.
  • strikeback03 - Thursday, March 24, 2011 - link

    Remember, the AT editors are spread all over. So while between them they certainly have at least 3 1920x1080/1200 monitors, Ryan (doing the testing) probably doesn't.

    Plus with different monitors wouldn't response times possibly be different? I'd imagine that would be odd in gaming.
  • tynopik - Thursday, March 24, 2011 - link

    > Remember, the AT editors are spread all over. So while between them they certainly have at least 3 1920x1080/1200 monitors, Ryan (doing the testing) probably doesn't.

    This has been a need for a while, and it's not like this review was completely unexpected, so not sure why they don't have a multi-monitor setup yet

    > Plus with different monitors wouldn't response times possibly be different? I'd imagine that would be odd in gaming.

    Well that's sort of the point, they wouldn't actually be gaming, so who cares?
  • Martin Schou - Thursday, March 24, 2011 - link

    I would have thought that the marketing departments of companies like Asus, Benq, Dell, Eizo, Fujitsu, HP, LaCie, LG, NEC, Philips, Samsung and ViewSonic would cream their pants at what is really very cheap PR.

    Supply sets of 3 or 5 1920x1080/1920x1200 displays and 3 or 5 2560x1440/2560x1600 displays in exchange for at least a full year's advertisement on a prominent tech website.

    If we use Dell as an example, they could supply a set of five U2211H and three U3011 monitors for a total cost of less than 5,900 USD per set. The 5,900 USD is what us regular people would have to pay, but in a marketing campaign it's really just a blip on the radar.

    Now, excuse me while I go dream of a setup that could pull games at 9,600x1080/5,400x1920 or 7,680x1600/4,800x2560 :D
  • Ryan Smith - Friday, March 25, 2011 - link

    I'd just like to note that advertising is handled separately from editorial content. The two are completely compartmentalized so that ad buyers can't influence editorial control. Conversely as an editor I can't sell ad space.
