8800 GT 512MB vs. 256MB

When AMD released the Radeon HD 3800 series, NVIDIA responded by saying that a cheaper 256MB version of the 8800 GT was on its way, priced below $200. NVIDIA delivered on part of that promise: we do have a 256MB 8800 GT in hand, but it's not a sub-$200 card. The 8800 GT 256 we have is the Alpha Dog Edition XXX from XFX, priced at $229 not including a $10 mail-in rebate. That's not too far off the mark, but it's still not less than $200.

The XFX card we have runs at a 650MHz core clock but only has a 1.6GHz memory data rate. The reference 512MB card runs at 600MHz core/1.8GHz memory.
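
For some quick context on what those clocks mean for memory bandwidth, here's a rough back-of-the-envelope sketch (assuming both boards keep the reference 256-bit memory interface; the figures are illustrative only):

```python
# Peak theoretical memory bandwidth: (bus width in bits / 8) * effective data rate
def bandwidth_gb_s(bus_width_bits, data_rate_gt_s):
    return bus_width_bits / 8 * data_rate_gt_s

# XFX 8800 GT 256MB XXX: 1.6GHz effective memory data rate
print(f"256MB card: {bandwidth_gb_s(256, 1.6):.1f} GB/s")   # ~51.2 GB/s
# Reference 8800 GT 512MB: 1.8GHz effective memory data rate
print(f"512MB card: {bandwidth_gb_s(256, 1.8):.1f} GB/s")   # ~57.6 GB/s
```

So on top of the smaller frame buffer, the XFX card also gives up roughly 11% of the reference card's memory bandwidth, despite its higher core clock.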

Quake Wars starts off by showing us a trend we'll see quite often with the 256MB 8800 GT: it performs virtually identically to its 512MB brother up to 1600 x 1200, then there's a sharp drop off.

The performance hit isn't as pronounced when you turn on AA; instead, the 256MB card takes a 10 - 20% hit across the board.

Bioshock shows the same thing: competitive performance up to 1600 x 1200, but at 1920 x 1200 the 512MB card has a 16% advantage, which grows to 60% at 2560 x 1600. It is worth noting that neither card is really playable at 2560 x 1600 in Bioshock.

World in Conflict brings the choke point forward to 1600 x 1200; the two cards behave similarly at 1280 x 1024, but the 512MB 8800 GT holds on to at least a 20% advantage at 1600 x 1200 and grows it to 40% at 2560 x 1600.

Older titles like Half Life 2 and Oblivion show absolutely no difference between the two cards, which tells us that the current wave of games, and most likely those that follow, requires more than a 256MB frame buffer. While 256MB could cut it in the Half Life 2 and Oblivion days, the same just isn't true anymore.

What we have here is an 8800 GTS 512 that's $50 more for not much more gain, and a 256MB 8800 GT that's at least $70 cheaper for a lot less performance. If you plan on keeping this card for any length of time, 512MB is the way to go. The frame buffer demands of modern games are only going to increase, and what we're seeing here today is an indication that the transition to 512MB as the minimum for high end gaming performance is officially underway. The 768MB of memory on the 8800 GTX still isn't strictly required, but 512MB looks like the sweet spot.

Comments

  • chizow - Tuesday, December 11, 2007 - link

    This is probably the first time I've felt an AT review wasn't worth reading, and definitely the first time I've said a review done by Anand wasn't worth reading. The conclusion is correct, but for very different reasons. There is no 10-15% advantage (which many would consider a significant enough reason to pay $50 more); there is NO advantage to getting a G92 GTS over a G92 GT.
    Firing Squad Review: http://www.firingsquad.com/hardware/nvidia_geforce...

    When looking over this review, pay special attention to:

    Leadtek GeForce 8800 GT Extreme (680MHz core/1.0GHz memory)

    vs.

    XFX GeForce 8800 GTS 512MB XXX (678MHz core/986MHz memory)

    Almost no difference at all in performance.......
  • Acragas - Tuesday, December 11, 2007 - link

    Did you read all the way to the end of the Firing Squad review? Because at the end, they seem to leave no doubt that the 8800GTS 512 is certainly the superior card. I <3 EVGA's step up program.

    They conclude:

    Given the GeForce 8800 GTS 512MB’s outstanding performance though, this higher price tag is definitely justified. The 8800 GTS 512MB cards blazed through all of our benchmarks, with performance generally falling anywhere between the GeForce 8800 GT and the GeForce 8800 GTX, while a card that’s been overclocked can put up numbers that are higher than the GTX in some cases.

    If you’ve got $400 to spend on a graphics upgrade this Christmas, the GeForce 8800 GTS 512MB is without a doubt the card we recommend. In fact, we wouldn’t be surprised if the GeForce 8800 GTS 512MB ends up stealing sales away from the GeForce 8800 GTX.
  • chizow - Tuesday, December 11, 2007 - link

    Why would I need to read their conclusion when their data allows you to come to your own? I'm sure they were blinded by the stark contrast in their pretty graphs without realizing they showed there was virtually no difference in performance between the parts at the same clock speed.

    Granted, the dual-slot cooler would allow you to run at higher clock speeds, but for a $50-100 difference in price, is a better cooler plus 16 SPs and 8 TMUs/TAUs that yield a 0-2% difference in performance worth it?
  • zazzn - Tuesday, December 11, 2007 - link

    I foolishly also bought an 8800 GTS like 4 months ago, and now the GTs are out stomping them, and for cheaper. I feel like a fool, and XFX doesn't offer a step-up program. Next time I buy, it's an EVGA for sure...

    I'm so sour about the situation right now, considering I needed a new PSU, from 450W to 600W, which also cost me $150 and which I most likely wouldn't have needed if I'd bought the GT now, since it requires less power.

    How crap is that?

    Can you post the results of an old 8800 GTS vs. a new 8800 GTS?
  • Kelly - Tuesday, December 11, 2007 - link

    Isn't the power consumption of 3870 vs 8800GT512 a bit odd compared to previous findings?

    Here are the numbers I am wondering about

    idle/load
    8800GT: 146/269 (difference:123)
    3870: 122/232 (difference:110)

    Compare this to
    http://www.anandtech.com/video/showdoc.aspx?i=3151...

    8800GT: 165/209 (difference:44)
    3870: 125/214 (difference:89)

    Or am I not doing the comparison correctly?

    Thanks for a nice review as always!
  • Spoelie - Tuesday, December 11, 2007 - link

    The original review's results were a bit strange: the gap between the 3850 and 3870 was way too great for a simple clock bump between them, and GDDR4 should consume less power than GDDR3. So these values seem more right. The gap between idle and load is bigger because they used a quad-core CPU in this article and a dual-core in the previous one.
  • Khato - Tuesday, December 11, 2007 - link

    Well, the load results from this article, compared to the previous one, bring to light a disturbing fact. If the definition of 'load' is a game and we're never CPU limited, then the performance of the graphics card is going to scale the CPU power usage accordingly, giving the impression that faster cards draw far more power than they actually do. On the flip side, if we're CPU limited (which might have been the case in the previous review), then CPU power is about constant and the high end cards are idling more often, giving the impression that they're more efficient than they really are.

    It'd be interesting to see the % CPU utilization for each card.
  • trajan - Tuesday, December 11, 2007 - link

    I promise I'm not paid to say this, but I feel like the new GTS plus EVGA's step-up program just saved me a load of cash. I (foolishly?) bought a superclocked EVGA 8800 GTS 640MB card almost 3 months ago, right before the 8800 GT came out. Yeah, bad timing. But when I checked online, I still have 18 days left on my step-up.

    So, very ironically, I am upgrading from a $395 card to a $360 card, paying $10 in shipping both ways. I don't get a refund, so I will essentially have paid $420 for a $360 part, but what a huge upside: I got a great card 3 months ago and am now getting a great upgrade almost for free.

    I say "finally" in the subject because switching from the superclocked 8800GTS 640 to a 8800 GT just didn't seem worth it, especially given how much money I'd be losing .. I kept hoping something better would come around even if it cost more, since I can upgrade to any sub-$400 card just by paying shipping..
  • Viditor - Tuesday, December 11, 2007 - link

    My question is this...

    If an 8800 GT 512 is $300-$350, and 2 x HD3850s are a total of $358, how do they compare in performance (in other words, do the Xfired 3850s outperform the 8800GT 512, and if so by how much)?
  • chizow - Tuesday, December 11, 2007 - link

    That's basically what it comes down to with the G92 vs. G80. Another big difference between the G80s and G92s that the review failed to mention is the 24 vs. 16 ROP advantage G80 maintains over G92; a lead which the increased clock speeds can't make up for.

    Anyways, it's pretty clear the G92 does better in shader-intensive games (newer ones) with its massive shader ops/sec advantage over the G80, but falls short when you enable AA or run at high resolutions, where fillrate becomes more important.

    In the end I think the result is better for gamers, but it doesn't look like there's a definitive winner this time around. It's basically trading performance at various settings/games, but for the end user the benefit is that you can get great performance at a much better price by giving up the ultra high-end settings (1920+ w/AA), which at this point are borderline playable anyway.
