NVIDIA's GeForce GTX 580: Fermi Refined
by Ryan Smith on November 9, 2010 9:00 AM EST

Final Thoughts
Even though NVIDIA is only launching a single card today there’s a lot to digest, so let’s get to it.
Since the GeForce GTX 580 arrived in our hands last week, we've been mulling over how to approach it. It boils down to two schools of thought: 1) do we praise NVIDIA for delivering a high-performance single-GPU card that strikes the right balance of performance and temperature/noise, or 2) do we give an indifferent thumbs-up to NVIDIA for finally delivering the card that we believe the GTX 480 should have been in the first place?
The answer, we've decided, is one of mild but well-earned praise. The GTX 580 is not a true next-generation successor to the GTX 480; it's the GTX 480 having gone back in the womb for 7 months of development. Much like AMD, NVIDIA faced a situation where they had to deliver a new product without a die shrink, and had limited options as a result. NVIDIA chose wisely, and came back with a card that is both decently faster and a refined GTX 480 at the same time.
With the GTX 480 we could recognize it as being the fastest single-GPU card on the market, but only by acknowledging that it was hot and loud at the same time. For buyers the GTX 480 was a tradeoff product – sure it's fast, but is it too hot/too loud for me? The GTX 580 requires no such tradeoff. We can never lose sight of the fact that it's a high-end card and is going to be more power hungry, louder, and hotter than many other cards on the market, but it's not the awkward card that the GTX 480 was. For these reasons our endorsement of the GTX 580 is much more straightforward, at least as long as we make it clear that the GTX 580 is less an upgrade for GTX 480 owners, and more a better upgrade for the GTX 285 and similar last-generation cards.
What we’re left with today is something much closer to the “traditional” state of the GPU market: NVIDIA has the world’s fastest single-GPU card, while AMD is currently nipping at their heels with multi-GPU products. Both the Radeon HD 5970 and Radeon HD 6870 CF are worthy competitors to the GTX 580 – they’re faster and in the case of the 6870 CF largely comparable in terms of power/temperature/noise. If you have a board capable of supporting a pair of 6870s and don’t mind the extra power it’s hard to go wrong, but only if you’re willing to put up with the limitations of a multi-GPU setup. It’s a very personal choice – we’d be willing to trade the performance for the simplicity of avoiding a multi-GPU setup, but we can’t speak for everyone.
So what’s next? A few different things. From the NVIDIA camp, NVIDIA is promising a quick launch of the rest of the GeForce 500 series. Given the short development cycles for NVIDIA we’d expect more refined GF10x parts, but this is very much a shot in the dark. Much more likely is a 3GB GTX 580, seeing as how NVIDIA's official product literature calls the GTX 580 the "GeForce GTX 580 1.5GB", a distinction that was never made for the GTX 480.
More interesting however will be what NVIDIA does with GF110 since it’s a more capable part than GF100 in every way. The GF100 based Quadros and Teslas were only launched in the last few months, but they’re already out of date. With NVIDIA’s power improvements in particular, this seems like a shoo-in for at least one improved Quadro and Tesla card. We also expect 500 series replacements for some of the GF100-based cards (with the GTX 465 likely going away permanently).
Meanwhile the AMD camp is gearing up for their own launches. The 6900 series is due to launch before the year is out, bringing with it AMD’s new Cayman GPU. There’s little we know or can say at this point, but as a part positioned above the 6800 series we’re certainly hoping for a slugfest. At $500 the GTX 580 is pricey (much like the GTX 480 before it), and while this isn’t unusual for the high-end market we wouldn’t mind seeing NVIDIA and AMD bring a high-intensity battle to the high-end, something that we’ve been sorely missing for the last year. Until we see the 6900 series we wouldn’t make any bets, but we can certainly look forward to it later this year.
160 Comments
RussianSensation - Wednesday, November 10, 2010 - link
Very good point techcurious. Which is why the comment in the review about the GTX 580 not being a quiet card at load is somewhat misleading. I have lowered my GTX 470 from 40% idle fan speed to 32% fan speed and my idle temperatures only went up from 38°C to 41°C. At 32% fan speed I can not hear the card at all over other case fans and my Scythe S-Flex F CPU fan. You could do the same with almost any videocard.

Also, as far as FurMark goes, the test does push all GPUs beyond their TDPs. TDP is typically not the most power the chip could ever draw, such as by a power virus like FurMark, but rather the maximum power that it would draw when running real applications. Since the HD 58xx/68xx series already have software and hardware PowerPlay enabled, which throttles those cards under power viruses like FurMark, it was already meaningless to use FurMark for "maximum" power consumption figures. Beside the point, FurMark is just a theoretical application. AMD and NV implement throttling to prevent VRM/MOSFET failures. This protects their customers.
While FurMark can be great for stability/overclock testing, the power consumption tests from it are completely meaningless, since its load is not something you can achieve in any videogame (can a videogame utilize all GPU resources to 100%? Of course not, since there are always bottlenecks in GPU architectures).
techcurious - Wednesday, November 10, 2010 - link
How cool would it be if nVidia added to its control panel a tab for dynamic fan speed control based on 3 user-selectable settings.

1) Quiet... which would spin the fan at the lowest speed while staying just enough below the GPU temperature threshold at load, and somewhere in the area of low 50°C temps at idle.
2) Balanced.. which would strike a balance between moderate fan speed (and noise levels), resulting in slightly lower load temperatures and perhaps 45°C at idle.
3) Cool.. which would spin the fan the fastest and be the loudest setting, but also the coolest, keeping load temperatures well below the maximum threshold and idle temps below 40°C. This setting would please those who want to extend the life of their graphics card as much as possible and do not care about noise levels, and who may have other fans in their PC that are louder anyway!
Maybe Ryan or someone else from Anandtech (who would obviously have much more pull and credibility than me) could suggest such a feature to nVidia and AMD too :o)
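The three profiles suggested above could be sketched as a simple temperature-to-duty-cycle mapping. This is purely an illustrative sketch of the idea, not an actual NVIDIA or AMD control panel API; the profile names, temperature targets, and fan-speed bounds are all assumptions chosen for the example.

```python
# Hypothetical sketch of the three-profile fan control idea described above.
# All temperature targets and fan percentages are illustrative assumptions.

PROFILES = {
    # name: (idle_target_c, load_ceiling_c, min_fan_pct, max_fan_pct)
    "quiet":    (52, 90, 25, 60),
    "balanced": (45, 80, 30, 75),
    "cool":     (38, 70, 40, 100),
}

def fan_speed(profile: str, gpu_temp_c: float) -> int:
    """Map the current GPU temperature to a fan duty cycle (percent) by
    interpolating linearly between the profile's idle target (minimum fan
    speed) and its load ceiling (maximum fan speed)."""
    idle_t, load_t, lo, hi = PROFILES[profile]
    if gpu_temp_c <= idle_t:
        return lo          # cool enough: run the fan at its floor
    if gpu_temp_c >= load_t:
        return hi          # at/above the ceiling: run the fan flat out
    frac = (gpu_temp_c - idle_t) / (load_t - idle_t)
    return round(lo + frac * (hi - lo))
```

For example, the "cool" profile would already be ramping the fan hard by 60°C, while "quiet" would still be near its floor at the same temperature, which is exactly the tradeoff the three settings are meant to expose.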
BlazeEVGA - Wednesday, November 10, 2010 - link
Here's what I dig about you guys at AnandTech: not only are your reviews very nicely presented, but you keep them relevant for us GTX 285 owners and other more legacy-bound interested parties - most other sites fail to provide this level of complete comparison. Much appreciated. Your charts are fantastic, your analysis and commentary is nicely balanced, and your attention to detail is most excellent - this all makes for a more simplified evaluation by the potential end user of this card.

Keep up the great work... don't know what we'd do without you...
Robaczek - Thursday, November 11, 2010 - link
I really liked the article but would like to see some comparison with the nVidia GTX 295.

massey - Wednesday, November 24, 2010 - link
Do what I did. Look up their article on the 295, and compare the benchmarks there to the ones here. Here's the link:
http://www.anandtech.com/show/2708
Seems like Crysis runs 20% faster at max res and AA. Is a 20% speed up worth $500? Maybe. Depends on how anal you are about performance.
lakedude - Friday, November 12, 2010 - link
Someone needs to edit this review! The acronym "AMD" is used in several places when it is clear "ATI" was intended. For example:
"At the same time, at least the GTX 580 is faster than the GTX 480 versus AMD’s 6800/5800 series"
lakedude - Friday, November 12, 2010 - link
Never mind, looks like I'm behind the times...

Nate007 - Saturday, November 13, 2010 - link
In the end, we (the gamers) who purchase these cards need to be supporting BOTH sides, so that AMD and Nvidia can both manage to stay profitable. It's not a question of who pwns who, but more importantly that we have CHOICE!!
Maybe some of the people here (or MOST) are not old enough to remember the days when mighty "INTEL" ruled the landscape. I can tell you for 100% fact that CPUs were expensive and there was no choice in the matter.
We can agree to disagree but in the END, we need AMD and we need NVIDIA to keep pushing the limits and offering buyers a CHOICE.
God help us if we ever lose one or the other; then we won't be here reading reviews and/or jousting back and forth on who has the biggest stick. We will all be crying and complaining about how expensive it will be to buy a decent video card.
Here's to both companies .............. Long live NVIDIA & AMD!
Philip46 - Wednesday, November 17, 2010 - link
Finally, at the high end Nvidia delivers a much cooler and quieter single-GPU card, one that is much more like the GTX 460, and less like the 480, in terms of performance/heat balance.

I'm one who needs PhysX in my games, and until now I had to go with an SLI 460 setup for one PC and, for a lower rig, a 2GB GTX 460 (for maxing GTA:IV out).
Also, I just prefer the crisp Nvidia desktop quality, and its drivers are more stable (and ATI's CCC is a nightmare).
For those who want everything, and who use PhysX, the 580 and its upcoming 570/560 siblings will be the only way to go.
For those who live by framerate alone, you may want to see what the next ATI lineup will deliver for its single-GPU setup.
But whatever you choose, this is a GREAT thing for the industry..and the gamer, as Nvidia delivered this time with not just performance, but also lower temps/noise levels, as well.
This is what the 480 should have been, but thankfully they fixed it.
swing848 - Wednesday, November 24, 2010 - link
Again, Anand is all over the place with different video cards, making judgments difficult. He even threw in a GTS 450 and an HD 4870 here and there. Sometimes he would include the HD 5970 and often not.
Come on Anand, be consistent with the charts.