The 8800 GTS 320MB and The Test

Normally, when a new part is introduced, we would spend some time talking about the number of pipelines, compute power, bandwidth, and all the other juicy bits of hardware goodness. But this time around, all we need to do is point back to our original review of the G80. Absolutely the only difference between the original 8800 GTS and the new 8800 GTS 320MB is the amount of RAM on board.

The GeForce 8800 GTS 320MB uses the same number of 32-bit wide memory modules as the 640MB version (10 in total, grouped in pairs to form 5 64-bit wide channels on a 320-bit bus). The difference is in density: the 640MB version uses 10 64MB modules, whereas the 320MB card uses 10 32MB modules. That makes things a little easier for us, as all the processing power, features, theoretical peak numbers, and the like stay the same. It also makes things very interesting, as we have a direct comparison point through which to learn just how much impact that extra 320MB of RAM has on performance.
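
To make the arithmetic concrete, here's a minimal sketch (plain Python, purely for illustration; the module counts and densities are the ones described above) showing how both cards end up with the same bus width but different capacities:

```python
# Both 8800 GTS variants use 10 memory modules, each 32 bits wide,
# grouped in pairs to form 5 x 64-bit channels.
MODULES = 10
MODULE_WIDTH_BITS = 32

def gts_memory_config(module_density_mb):
    bus_width = MODULES * MODULE_WIDTH_BITS   # total memory bus width in bits
    channels = MODULES // 2                   # modules paired into 64-bit channels
    capacity = MODULES * module_density_mb    # total on-board memory in MB
    return bus_width, channels, capacity

for density in (64, 32):  # 64MB modules -> 640MB card, 32MB modules -> 320MB card
    width, channels, capacity = gts_memory_config(density)
    print(f"{density}MB modules: {width}-bit bus, {channels} channels, {capacity}MB total")

# Both configurations come out to a 320-bit bus and 5 channels;
# only the total capacity (640MB vs. 320MB) changes.
```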

Here's a look at the card itself. There really aren't any differences in the layout or design of the hardware; the only visible change is the use of the traditional green PCB rather than the black of the recent 8800 parts we've seen.





Interestingly, our EVGA sample was overclocked quite high. Core and shader speeds were at 8800 GTX levels, and memory weighed in at 850MHz. In order to test the stock speeds of the 8800 GTS 320MB, we made use of software to edit and flash the BIOS on the card. The 576MHz core and 1350MHz shader clocks were set down to 500MHz and 1200MHz respectively, and memory was adjusted down to 800MHz as well. This isn't something we recommend people run out and try, as we almost trashed our card a couple of times, but it got the job done.
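
For reference, here's a rough sketch (same plain-Python style as above, not output from any flashing tool) of what those memory clock changes mean for theoretical bandwidth on the GTS's 320-bit bus:

```python
BUS_WIDTH_BITS = 320  # 8800 GTS memory bus width

def gddr3_bandwidth_gb_s(mem_clock_mhz):
    # GDDR3 transfers data on both clock edges, so the data rate is 2x the clock.
    transfers_per_sec = mem_clock_mhz * 1e6 * 2
    return transfers_per_sec * BUS_WIDTH_BITS / 8 / 1e9  # bytes/sec -> GB/s

print(f"Stock 800MHz memory:  {gddr3_bandwidth_gb_s(800):.1f} GB/s")  # ~64 GB/s
print(f"EVGA's 850MHz memory: {gddr3_bandwidth_gb_s(850):.1f} GB/s")  # ~68 GB/s
```

The core and shader clock reductions (576MHz to 500MHz and 1350MHz to 1200MHz) scale back texturing and shading throughput in the same proportional way.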

The test system is the same as we have used in our recent graphics hardware reviews:

System Test Configuration
CPU: Intel Core 2 Extreme X6800 (2.93GHz/4MB)
Motherboard: EVGA nForce 680i SLI
Chipset: NVIDIA nForce 680i SLI
Chipset Drivers: NVIDIA nForce 9.35
Hard Disk: Seagate 7200.7 160GB SATA
Memory: Corsair XMS2 DDR2-800 4-4-4-12 (1GB x 2)
Video Card: Various
Video Drivers: ATI Catalyst 7.1
               NVIDIA ForceWare 93.71 (G7x)
               NVIDIA ForceWare 97.92 (G80)
Desktop Resolution: 2560 x 1600 - 32-bit @ 60Hz
OS: Windows XP Professional SP2


Comments


  • tacoburrito - Monday, February 12, 2007 - link

    With all the eye candy turned on, the 320MB card seems to be only on par with the previous-gen 79xx cards, but costs almost twice as much. I'd much rather cough up the extra $200 and get the full GTS version.
  • DerekWilson - Monday, February 12, 2007 - link

    Actually, the 320MB card blows away the 7 series in our tests. Why would you say that it's only on par? At 16x12, the 8800 GTS 320MB is 60% faster, and the difference in performance only gets larger from there.
  • tacoburrito - Monday, February 12, 2007 - link

    With the exception of Half Life 2, at 4x AA, wouldn't you say that the 8800 GTS 320 is only marginally better than the 7950 GT, but would cost twice as much?
  • tacoburrito - Monday, February 12, 2007 - link

    Whoops, I meant to say 7900 GTX
  • DerekWilson - Monday, February 12, 2007 - link

    From the context of the thread, I assumed you were talking about Oblivion.

    Without AA, the 8800 320MB is much better than the 7900 GTX. With AA, there is an argument to be made, but the price of the 7900 GTX (as Jarred pointed out) is higher.

  • JarredWalton - Monday, February 12, 2007 - link

    I'd be very curious to find out where you're seeing 7900 GTX cards for "half the price". I don't see any in stock when taking a quick look at major resellers, and our Pricing Engine (http://labs.anandtech.com/products.php?sfilter=462) confirms that. I'm pretty sure the 7900 GTX is discontinued now, and prices never got below $400.
  • Wwhat - Monday, February 12, 2007 - link

    It still remains to be seen how DX10 games (or future OpenGL games that use geometry shaders?) run on the various incarnations of the new cards; you should have put that in the conclusion as a caveat. It's not just textures anymore, you know.

    I don't think there's anything at all currently that uses geometry shaders. You wonder why some developer doesn't throw together a quick test utility; billions of people on the planet and nobody can make that little effort? Geez.
    Surely someone at Crytek or id or somewhere can write a small looping thing with a framecounter? Anand should send out some mails, get someone on his feet.

  • DerekWilson - Monday, February 12, 2007 - link

    There are some DX10 sample apps that make use of geometry shaders ... I've been working on testing these, but it is more difficult than it may seem, as FRAPS has trouble with DX10 apps.

    You do have a point though -- DX10 performance will be important. The problem is that we can't really make a recommendation based on DX10 performance.

    The 8 series parts do have more value than the 7 series and X1K series parts in that they support DX10. But that is as far as we can take it. Performance in the games we have does matter, and it is much more prudent to base a purchase only on the information we know.

    Sure, if the cost and performance of an 8 series part is the same as or very near that of some DX9 class hardware, the features and DX10 support are there to recommend it over the competition. But it's hard to really use this information in any other capacity without knowing how good their DX10 support really is.
  • Awax - Monday, February 12, 2007 - link

    The main point for me is the low impact of memory size on modern games.

    On previous-generation games, like Quake 4, developers had to use a lot of high-resolution textures/bump maps/lookup maps to achieve advanced effects within the limited raw performance and flexibility of the cards available.

    With DX9, and even more so with DX10, the new way is to _CALCULATE_ things completely instead of having them interpolated with tricks using intermediate results or precomputed lookup tables stored in textures.
  • DerekWilson - Monday, February 12, 2007 - link

    But new ways to calculate things will also benefit from having huge amounts of data to calculate things from.

    It's really hard to speculate on the direction DX10 games will take at this point. Certainly we will see more use of programmable features and a heavier impact on processing power. But memory usage will also increase. We'll just have to wait and see what happens.
