NVIDIA can be a very predictable company at times. It’s almost unheard of for them to release only a single product based on a high-end GPU, so when they released the excellent GeForce GTX 580 last month we knew it was only a matter of time until additional GTX 500 series cards would join their product lineup.

Now, less than a month after the launch of the GTX 580, that time has come. Today NVIDIA is launching the GeForce GTX 570, the second card to utilize their new GF110 GPU. As the spiritual successor to the GTX 470 and very much the literal successor to the GTX 480, the GTX 570 brings the GTX 580’s improvements to a lower-priced, lower-performing card. Furthermore, at $350 it serves to fill the sizable gap between NVIDIA’s existing GTX 580 and GTX 470 cards.

So how does NVIDIA’s latest and second greatest stack up, and is it a worthy sibling to the GTX 580? Let’s find out.

| | GTX 580 | GTX 570 | GTX 480 | GTX 470 |
|---|---|---|---|---|
| Stream Processors | 512 | 480 | 480 | 448 |
| Texture Address / Filtering | 64/64 | 60/60 | 60/60 | 56/56 |
| ROPs | 48 | 40 | 48 | 40 |
| Core Clock | 772MHz | 732MHz | 700MHz | 607MHz |
| Shader Clock | 1544MHz | 1464MHz | 1401MHz | 1215MHz |
| Memory Clock | 1002MHz (4008MHz data rate) GDDR5 | 950MHz (3800MHz data rate) GDDR5 | 924MHz (3696MHz data rate) GDDR5 | 837MHz (3348MHz data rate) GDDR5 |
| Memory Bus Width | 384-bit | 320-bit | 384-bit | 320-bit |
| Frame Buffer | 1.5GB | 1.25GB | 1.5GB | 1.25GB |
| FP64 | 1/8 FP32 | 1/8 FP32 | 1/8 FP32 | 1/8 FP32 |
| Transistor Count | 3B | 3B | 3B | 3B |
| Manufacturing Process | TSMC 40nm | TSMC 40nm | TSMC 40nm | TSMC 40nm |
| Price Point | $499 | $349 | ~$400 | ~$240 |

The GTX 570 is likely the closest thing we’ll see to a GF110 version of the GTX 480 – or any other GF100 card, for that matter. With the higher yields afforded by the GF110 design and TSMC’s process improvements, we’ve already seen NVIDIA go for a fully operational GF110 in the GTX 580, so the GTX 570 works down from there. The end result melds the GTX 480’s shader count with the GTX 470’s ROP count and memory bus, and with a core clock a bit above the GTX 480’s and well above the GTX 470’s, performance ends up much closer to the GTX 480 than the GTX 470.

With 15 of 16 SMs enabled, the GTX 570 matches the GTX 480 at a total of 480 active CUDA cores and 60 texture units. The core clock is 732MHz, 32MHz (4.6%) over the GTX 480, in order to make up for the reduced ROP/memory blocks and to take advantage of GF110’s lower leakage at higher clocks (as a minor aside, why the strange clocks lately? Look to the PLL). Meanwhile the memory system uses the same 320-bit (64-bit x 5) memory bus and 10-chip configuration we saw on the GTX 470, but this time the memory clock is up to 950MHz (3.8GHz data rate), 113MHz (13.5%) over the GTX 470. Memory clocks are also marginally faster than the GTX 480’s, by 26MHz, but that isn’t nearly enough to make up for the narrower memory bus. Finally we have the ROPs, which are tied to both the core and memory subsystems and split the difference: it’s the same 40 ROPs and 640KB of L2 cache as the GTX 470, but because the ROPs run on the core clock they’re running 125MHz (20.6%) faster than on the GTX 470.
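
To put some numbers behind this, here’s a quick back-of-the-envelope sketch using the spec-table figures; the bandwidth and fill-rate formulas are the usual theoretical-peak approximations (our own reader aid, not NVIDIA’s numbers), not measured performance:

```python
# Spec-table figures: core clock (MHz), memory data rate (MT/s), bus width (bits), ROPs
cards = {
    "GTX 580": (772, 4008, 384, 48),
    "GTX 570": (732, 3800, 320, 40),
    "GTX 480": (700, 3696, 384, 48),
    "GTX 470": (607, 3348, 320, 40),
}

def bandwidth_gbps(data_rate_mts, bus_bits):
    # data rate x bus width / 8 bits per byte, in GB/s
    return data_rate_mts * bus_bits / 8 / 1000

for name, (core, mem, bus, rops) in cards.items():
    print(f"{name}: {core} MHz core, "
          f"{bandwidth_gbps(mem, bus):.1f} GB/s memory, "
          f"{rops * core / 1000:.1f} GPixel/s peak fill")

# The clock deltas cited above
print(f"GTX 570 core clock vs GTX 480: {732 / 700 - 1:+.1%}")    # ~ +4.6%
print(f"GTX 570 mem clock  vs GTX 470: {3800 / 3348 - 1:+.1%}")  # ~ +13.5%
print(f"GTX 570 ROP clock  vs GTX 470: {732 / 607 - 1:+.1%}")    # ~ +20.6%
```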

Since it’s based on GF110, the GTX 570 also shares the architectural enhancements we first saw in the GTX 580. This means the GTX 570 can retire twice as many FP16 texels per clock as the GTX 480, and it also features NVIDIA’s improved Z-culling system. These enhancements help to further close the potential performance gap with the GTX 480 that results from the lower ROP count and narrower memory bus. Do note, however, that compared to the GTX 470 the overall improvements are asymmetric: we’re looking at around a 30% theoretical improvement in shading/compute/texture performance, but only a 13.5% improvement in memory bandwidth. So unlike the GTX 580 and its balanced gains, the GTX 570’s advantage over the GTX 470 will be greater in shader-bound games and applications, and smaller when memory bandwidth is the limit.
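
That asymmetry falls straight out of the spec table. The sketch below shows the first-order estimates (units x clock for throughput, data rate x bus width for bandwidth); these are theoretical figures, not benchmark results:

```python
# GTX 570 vs GTX 470, theoretical peaks from the spec table
shader_570 = 480 * 1464      # CUDA cores x shader clock (MHz)
shader_470 = 448 * 1215
texture_570 = 60 * 732       # texture units x core clock (MHz)
texture_470 = 56 * 607
bw_570 = 3800 * 320 / 8      # data rate (MT/s) x bus width (bits) / 8
bw_470 = 3348 * 320 / 8

print(f"Shader throughput:  {shader_570 / shader_470 - 1:+.1%}")    # ~ +29%
print(f"Texture throughput: {texture_570 / texture_470 - 1:+.1%}")  # ~ +29%
print(f"Memory bandwidth:   {bw_570 / bw_470 - 1:+.1%}")            # ~ +13.5%
```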

As the GTX 470’s successor, the GTX 570 generally fits in the same power and noise profile as the GTX 470. NVIDIA puts the TDP at 219W, a mere 4W over the GTX 470, highlighting the fact that NVIDIA has gone for maximizing performance within their selected power profile, as opposed to increasing performance while also decreasing power consumption, as was the case with the GTX 580. The card is otherwise identical to the GTX 580: the GTX 570 uses the same PCB, the same vapor chamber cooler, and the same shroud.

NVIDIA is putting the MSRP for the card at $349, a price point that in recent weeks has been vacant, as neither NVIDIA nor AMD had a product between the GTX 480/470 and the Radeon HD 5970/5870 respectively. Coming from the top end of the market this is more or less a nice price drop for GTX 480-like performance, but it also means the Radeon HD 5870 and GTX 470 are the GTX 570’s value threats: the 570 is a good bit faster, but they’re nearly $100 cheaper. The only other competition for the GTX 570 for now will be the GTX 460 1GB in SLI and the Radeon HD 6850 in CrossFire.

Today’s launch should be a hard launch. Going into the GTX 580 launch we had our doubts that NVIDIA could have so many GF110 products ready on such short notice, but they proved us wrong there, and we’re willing to take them at face value on this. Based on their own estimates and the lower price of the GTX 570 we’d expect some cards to sell out, but availability shouldn’t be an issue.

Finally, with the launch of the GTX 570, NVIDIA’s lineup will be shifting. GF110 is a very effective replacement for GF100, and NVIDIA will be looking to phase out GF100 cards as quickly as they reasonably can. The GTX 470 will still be around for quite some time (all indications are that NVIDIA still has a lot of GF100 chips left), but the GTX 480’s days are numbered.

Winter 2010 Video Card MSRPs

| NVIDIA | Price | AMD |
|---|---|---|
| GeForce GTX 580 | $500 | |
| | $470 | Radeon HD 5970 |
| | $410 | |
| GeForce GTX 570 | $350 | |
| | $250 | Radeon HD 5870 |
| | $240 | Radeon HD 6870 |
| | $180-$190 | Radeon HD 6850 |

Comments

  • TheHolyLancer - Tuesday, December 7, 2010 - link

    Likely because when the 6870s came out they included an FTW edition of the 460 and got hammered for it? Not to mention that in their own guidelines they said no OCing in launch articles.

    If they do do an OC comparison, it'll most likely be in a special article, possibly with retail-bought samples rather than sent demos...
  • Ryan Smith - Tuesday, December 7, 2010 - link

    As a rule of thumb I don't do overclock testing with a single card, as overclocking is too variable. I always wait until I have at least 2 cards to provide some validation to our results.
  • CurseTheSky - Tuesday, December 7, 2010 - link

    I don't understand why so many cards still cling to DVI. Seeing that Nvidia is at least including native HDMI on their recent generations of cards is nice, but why, in 2010, on an enthusiast-level graphics card, are they not pushing the envelope with newer standards?

    The fact that AMD includes DVI, HDMI, and DisplayPort natively on their newer lines of cards is probably what's going to sway my purchasing decision this holiday season. Something about having all of these small, elegant, plug-in connectors and then one massive screw-in connector just irks me.
  • Vepsa - Tuesday, December 7, 2010 - link

    It's because most people still have DVI on their desktop monitors.
  • ninjaquick - Tuesday, December 7, 2010 - link

    DVI is a very good plug, man; I don't see why you're hating on it.
  • ninjaquick - Tuesday, December 7, 2010 - link

    I meant to reply to OP.
  • DanNeely - Tuesday, December 7, 2010 - link

    Aside from Apple, almost no one uses DP. Assuming it wasn't too late in the life cycle to do so, I suspect the new GPU used in next year's 6xx series of cards will have DP support so NVIDIA can offer multi-display gaming on a single card, but only because a single DP clockgen (shared by all DP displays) is cheaper to add than 4 more legacy clockgens (one needed per VGA/DVI/HDMI display).
  • Taft12 - Tuesday, December 7, 2010 - link

    Market penetration is just a bit more important than your "elegant connector" for an input nobody's monitor has. What a poorly thought-out comment.
  • CurseTheSky - Tuesday, December 7, 2010 - link

    Market penetration starts by companies supporting the "cutting edge" of technology. DisplayPort has a number of advantages over DVI, most of which would be beneficial to Nvidia in the long run, especially considering the fact that they're pushing the multi-monitor / combined resolution envelope just like AMD.

    Perhaps if you only hold on to a graphics card for 12-18 months, or keep a monitor for many years before finally retiring it, the connectors your new $300 piece of technology provides won't matter to you. If you're like me and tend to keep a card for 2+ years while jumping on great monitor deals every few years as they come up, it's a different ballgame. I've had DisplayPort-capable monitors for about 2 years now.
  • Dracusis - Tuesday, December 7, 2010 - link

    I invested just under $1000 in a 30" professional 8-bit PVA LCD back in 2006 that is still better than 98% of the crappy 6-bit TN panels on the market. It has been used with 4 different video cards, and supports DVI, VGA, Component HD and Composite SD. It has an ultra-wide color gamut (113%), great contrast, and a matte screen with super deep blacks and perfectly uniform backlighting, along with memory card readers and USB ports.

    Neither DisplayPort nor any other monitor on the market offers me anything new or better in terms of visual quality or features.

    If you honestly see an improvement in quality spending $300 every 18 months on new "value" displays, then I feel sorry for you; you've made some poorly informed choices and wasted a lot of money.
