No More Memory Bandwidth

Again, we have a 256-bit (4x 64-bit) memory interface to GDDR3 memory. The local graphics memory setup is not significantly different from that of the 6800 series of cards, and runs only slightly faster at a 1.2 GHz effective data rate. This will work out in NVIDIA's favor as long as newer games continue to put a heavier burden on pixel shader processing. NVIDIA sees texture bandwidth outweighing color and z bandwidth in the not too distant future. This doesn't mean the quest for ever increasing bandwidth will stop; it just means that the reasons we will need more bandwidth will change.

A good example of the changing needs of graphics cards is Half-Life 2. While the game runs very well even on older graphics cards like the 9800 Pro, the design is such that increased memory bandwidth is far less important than having more shader processing power. This is why we see the 6600GT cards significantly outperform the 9800 Pro. Even more interesting is that in our testing, we found that enabling 4xAA on a 9800 Pro didn't affect performance of HL2 much at all, while increasing the resolution from 1024x768 to 1280x1024 had a substantial impact on frame rates. If the HL2 model is a good example of the future of 3D engines, NVIDIA's decision to increase pixel processing power while leaving memory bandwidth for the future makes a lot of sense.

On an interesting side note, the performance tests in this article are mostly based around 1600x1200 and higher resolutions. Memory usage at 2048x1536 with 32-bit color and z-buffer runs a solid 144MB for double-buffered rendering with 4xAA. This makes a 256MB card a prerequisite for this setup, but depending on the textures, render targets, and other local memory usage, 256MB may be a little short. PCI Express helps a little to alleviate any burden placed on system memory, but it is conceivable that some games could get choppier when swapping in and out large textures, normal maps, and the like.

We don't feel that ATI's 512MB X850 really brings anything necessary to the table, but with this generation we could start to see a real use for 512MB of local memory. MRTs, larger textures, normal maps, vertex textures, huge resolutions, and a lack of hardware compression for fp16 and fp32 textures all mean that we are on the verge of seeing games push memory usage way up. Processing these huge stores of data requires GPUs powerful enough to utilize them efficiently. The G70 begins to offer that kind of power. For the majority of today's games, we are fine with 256MB of RAM, but moving into the future, it's easy to see how more would help.

In addition to these issues, a 512MB card would be a wonderful fit for Dual-Link DVI. This would make the part a nice companion to Apple's largest Cinema Display (which is currently beyond the maximum resolution supported by the GeForce 7800 GTX). In case anyone is curious, a double-buffered, 4xAA, 32-bit color+z framebuffer at 2560x1600 is about 190MB.
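The framebuffer figures above can be sanity-checked with a quick calculation. The sketch below is our own back-of-the-envelope estimate, assuming a straightforward layout of two 32-bit color buffers and one 32-bit z-buffer, each stored at full 4x multisample resolution with no compression; actual driver allocations will differ:

```python
# Rough framebuffer size estimate: double-buffered 32-bit color plus 32-bit z,
# with every buffer stored at full multisample resolution (no compression).
def framebuffer_mb(width, height, msaa_samples=4,
                   color_bytes=4, z_bytes=4, color_buffers=2):
    # Bytes per screen pixel across all buffers at the multisampled rate.
    bytes_per_pixel = msaa_samples * (color_buffers * color_bytes + z_bytes)
    return width * height * bytes_per_pixel / (1024 * 1024)

print(framebuffer_mb(2048, 1536))  # 144.0 MB, matching the figure quoted above
print(framebuffer_mb(2560, 1600))  # 187.5 MB, i.e. "about 190MB"
```

Under these assumptions, each pixel costs 48 bytes, which is why stepping up from 2048x1536 to 2560x1600 adds roughly 44MB.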

In our briefings on G70, we were told that every part of the chip has been at least slightly updated from NV4x, but the general architecture and feature set is the same. There have been a couple of more significant updates as well, namely the increased performance capability of a single shader pipe and the addition of transparency antialiasing. Let's take a look at these factors right now.



Comments

  • mrdeez - Thursday, June 23, 2005 - link

    Please post benchmarks at resolutions that are commonly used, or this becomes a workstation graphics card article and not one for gamers. I mean really, 2048x1536 or whatever, who games at that resolution??? While this card is powerful, it should be mentioned that you don't need it unless you run a resolution over 1600x1200. Those were some ridiculous resolutions, though. And again, post some benchmarks at 1280x1024 for us LCD'ers.
  • Shinei - Thursday, June 23, 2005 - link

    #95: Did you pay to read this article? I know I didn't...

    #94: I guess you missed the part where they said that all resolutions below 1600x1200 were essentially identical in performance? If you only play at 1024x768, why are you reading a review of a $600 video card? Go buy a 6600GT instead.
  • jeffrey - Wednesday, June 22, 2005 - link


    Has the staff at Anandtech never heard of "Vacation Coverage"?

    The excuse of your Web Editor being on vacation is, in reality, an admission of improper planning.

    A major hardware site that is dedicated to cutting-edge technology should have planned better. New high-end GPU launches happen by NVIDIA only about 2-3 times a year at most.

    This was one of the HUGE launches of the year, and it was messed up because the team didn't feel it was important enough to get some help with the article. There was damage done to Anandtech today due to the article errors and due to the casual admission in post #83 about not caring to properly cover a "Super Bowl" type of product launch.

    Save your apologies to the message board, give them to Anand.
  • geekfool - Wednesday, June 22, 2005 - link

    How about benchmarking some useful resolutions? This review was essentially useless.
  • JarredWalton - Wednesday, June 22, 2005 - link

    86 - Trust me, most of us other editors saw the article, and quite a few of us offered a helping hand. NDAs are a serious pain in the rear, though. Derek was busy pulling all-nighters and functioning on limited sleep for several days, and just getting the article done is only half the battle. Getting the document and results into the document engine for a large article with a lot of graphs can take quite a while and is an error-prone process.

    The commentary on the gaming benchmarks, for instance, was written in one order and posted in a different order. So please pardon the use of "this is another instance" or "once again" when we're talking about something for the first time. Anyway, I've got a spreadsheet of the benchmarks from early this morning, and other than non-functional SLI in a few games, the numbers appeared more or less correct. The text also didn't have as many typos. Crunch time and getting the final touches put on a major article isn't much fun.

    Thankfully, I'm just the SFF/Guide man, so I'm rarely under NDA pressure. ;)
  • robp5p - Wednesday, June 22, 2005 - link

    I would love to see someone start benchmarking in widescreen resolutions! 1920x1200 begs for a fast video card like this. As was pointed out, the only real benefits of the 7800 come at high resolutions, and many people buying high-resolution monitors these days are getting widescreen LCDs.

    and btw, my 2405fpw is sitting in a box right next to me in the office, begging me to open it up before I get home...this thing will be fun to get home on the subway
  • patriot336 - Wednesday, June 22, 2005 - link

    Where is the Monarch and Tiger love?

    Both are $599.00
  • BikeDude - Wednesday, June 22, 2005 - link

    $600 for a card that only features single-link DVI outputs? Yeah right, pull the other one, nVidia!

  • ta2 - Wednesday, June 22, 2005 - link

    As a player of EVE Online, I can tell you that the game is entirely CPU dependent. For that matter, it will peg ANY CPU you have at 100%. I mean ANY CPU. For the testing, you should use 1600x1200 with max AA and AF and go into an area with many player ships in EVE Online. I guarantee you will not get 60 FPS. Impractical and unscientific, but it would still give better results than this review.
  • TinyTeeth - Wednesday, June 22, 2005 - link

    I am very impressed by the performance of the new chip. NVIDIA seems to have fixed the problems SLI had during the 6800 generation.

    I am also pleased they have managed to deliver the cards so quickly. That also puts some pressure on ATI.
