No More Memory Bandwidth

Again, we have a 256-bit (4x 64-bit) memory interface to GDDR3. The local graphics memory setup is not significantly different from that of the 6800 series of cards, running only slightly faster at a 1.2 GHz effective data rate. This will work out in NVIDIA's favor as long as newer games continue to put a heavier burden on pixel shader processing. NVIDIA sees texture bandwidth outweighing color and z bandwidth in the not-too-distant future. This doesn't mean the quest for ever-increasing bandwidth will stop; it just means that the reasons we need more bandwidth will change.
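For the curious, the peak theoretical bandwidth of this interface falls straight out of the bus width and effective data rate. Here's a minimal back-of-the-envelope sketch in Python (the 256-bit width and 1.2 GHz rate are the figures quoted above):

    # Peak theoretical memory bandwidth from bus width and effective data rate.
    bus_width_bits = 256        # 4x 64-bit GDDR3 channels
    effective_rate = 1.2e9      # 1.2 GHz effective (600 MHz DDR)

    bytes_per_transfer = bus_width_bits / 8      # 32 bytes per transfer
    bandwidth = bytes_per_transfer * effective_rate

    print(f"{bandwidth / 1e9:.1f} GB/s")         # -> 38.4 GB/s

That 38.4 GB/s is only a small step up from the 6800 Ultra's 35.2 GB/s, which underscores the point: the extra performance this generation has to come from the pipelines, not the memory.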

A good example of the changing needs of graphics cards is Half-Life 2. While the game runs very well even on older graphics cards like the 9800 Pro, its design makes added memory bandwidth far less important than added shader processing power. This is why we see the 6600GT significantly outperform the 9800 Pro. Even more interesting, in our testing we found that enabling 4xAA on a 9800 Pro barely affected HL2 performance, while increasing the resolution from 1024x768 to 1280x1024 had a substantial impact on frame rates (AA primarily stresses memory bandwidth, while higher resolutions stress pixel processing). If HL2 is a good model for the future of 3D engines, NVIDIA's decision to increase pixel processing power now and leave memory bandwidth for later makes a lot of sense.

On an interesting side note, the performance tests in this article mostly center on 1600x1200 and higher resolutions. Memory usage at 2048x1536 with 32-bit color and z runs a solid 144MB for double-buffered rendering with 4xAA. This makes a 256MB card a prerequisite for that setup, and depending on the textures, render targets, and other local memory usage, even 256MB may come up a little short. PCI Express helps alleviate some of the cost of spilling over into system memory, but it is conceivable that some games could get choppy when swapping large textures, normal maps, and the like in and out.
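For those wondering where the 144MB figure comes from, here's a quick sketch of the arithmetic. We're assuming both color buffers and the z-buffer are stored with one 4-byte value per sample; actual driver allocation strategies may differ:

    # Double-buffered framebuffer footprint with multisample AA.
    # Assumes both color buffers and the z-buffer store one 4-byte value
    # per sample -- an illustrative assumption, not necessarily how the
    # driver actually lays out surfaces.
    def framebuffer_mb(width, height, aa_samples=4, bytes_per_sample=4,
                       color_buffers=2):
        pixels = width * height
        color = color_buffers * pixels * aa_samples * bytes_per_sample
        z = pixels * aa_samples * bytes_per_sample
        return (color + z) / 2**20

    print(framebuffer_mb(2048, 1536))   # -> 144.0 MB
    print(framebuffer_mb(2560, 1600))   # -> 187.5 MB

The same arithmetic also reproduces the roughly 190MB figure for 2560x1600 that comes up in the Dual-Link DVI discussion below.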

We don't feel that ATI's 512MB X850 really brings anything necessary to the table, but with this generation we could start to see a real use for 512MB of local memory. MRTs, larger textures, normal maps, vertex textures, huge resolutions, and a lack of hardware compression for fp16 and fp32 textures all mean that we are on the verge of seeing games push memory usage way up. Processing these huge stores of data requires GPUs powerful enough to use them efficiently, and the G70 begins to offer that kind of power. For the majority of today's games, 256MB of RAM is fine, but moving into the future, it's easy to see how more would help.
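To put the compression point in perspective, consider the storage required for a single texture in a few common formats. The 2048x2048 size here is our own illustrative example (mip chains, roughly another third, are ignored), not a figure from NVIDIA:

    # Storage for a single 2048x2048 RGBA texture in various formats.
    # Bytes-per-texel values are the standard ones for each format;
    # the texture size is an illustrative choice.
    TEXELS = 2048 * 2048

    formats = {
        "DXT1 (block compressed)": 0.5,   # 8 bytes per 4x4 block
        "RGBA8 (uncompressed)":    4.0,
        "RGBA fp16":               8.0,   # no hardware compression
        "RGBA fp32":               16.0,  # no hardware compression
    }

    for name, bpt in formats.items():
        print(f"{name}: {TEXELS * bpt / 2**20:.1f} MB")
    # DXT1: 2.0 MB, RGBA8: 16.0 MB, fp16: 32.0 MB, fp32: 64.0 MB

A handful of uncompressed floating point textures at that size will chew through 256MB very quickly, which is exactly the scenario that makes 512MB start to look attractive.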

In addition to these issues, a 512MB card would be a wonderful fit for Dual-Link DVI. That would make the part a nice companion to Apple's largest Cinema Display (which is currently beyond the maximum resolution supported by the GeForce 7800 GTX). In case anyone is curious, a double-buffered 4xAA 32-bit color+z framebuffer at 2560x1600 is about 190MB.

In our briefings on G70, we were told that every part of the chip has been at least slightly updated from NV4x, but the general architecture and feature set remain the same. There have also been a couple of more significant changes, namely the increased performance of a single shader pipe and the addition of transparency antialiasing. Let's take a look at these now.

Comments

  • swatX - Wednesday, June 22, 2005 - link

    SLI is meant to be played at high res... if you've got money to burn on SLI, then I am damn sure you've got money to burn on a 19" monitor ;)
  • CtK - Wednesday, June 22, 2005 - link

    Can dual display be used in SLI mode?
  • Johnmcl7 - Wednesday, June 22, 2005 - link

    In general, 6600GT SLI performance seems a bit inconsistent; in some cases it's really good, as with BF2, but in others it's not as good as a 6800GT.

    John
  • bob661 - Wednesday, June 22, 2005 - link

    Anyone notice how an SLI'd 6600GT is just as quick as a 6800 Ultra in BF2?
  • R3MF - Wednesday, June 22, 2005 - link

    give me some details on the 7800 and 7800GT

    what, when, and how much?
  • bob661 - Wednesday, June 22, 2005 - link

    #59
    I am more eager to see how the new midrange cards will perform than these parts, but if I had a spare $600, I would jump all over this.
  • bob661 - Wednesday, June 22, 2005 - link

    #56
    LMAO!!!!
  • bob661 - Wednesday, June 22, 2005 - link

    #44
    And I thought paying $350 for a video card was too much then. Even before that, there was the $200 high end, and before that the $100 high end. I balked at all of those prices, but I understood why they were priced as such and didn't bitch every time the costs went up. The bar keeps being raised and the prices go with it. Inflation, more features, and the fact that most of us here can afford $350 video cards push the cost of new PREMIUM cards higher by the year. It's only going to go up unless either people quit buying the high end cards or the manufacturers find a magical process to reduce costs dramatically.
  • Johnmcl7 - Wednesday, June 22, 2005 - link

    You're quite right, there's always a premium for the best, and I don't see any difference here; no one is being forced to buy this graphics card. As usual, I'll wait until something offers me a better price/performance ratio over my current X850XT/6800 Ultra duo.

    John
  • Avalon - Wednesday, June 22, 2005 - link

    Seems to be a problem with the last Knights of the Old Republic 2 graph. Both 7800GTX setups are "performing" worse than all the other cards benched. Despite all the mistakes, it still seems like I was right in that this card is made for those who play at high resolutions. Anyone with an R420 or NV40 based card who plays at 16x12 or less should probably not bother upgrading, unless they feel the need to.
