No More Memory Bandwidth

Again, we have a 256-bit (4x 64-bit) memory interface to GDDR3 memory. The local graphics memory setup is not significantly different from that of the 6800 series of cards and only runs slightly faster, at a 1.2 GHz effective data rate. This will work out in NVIDIA's favor as long as newer games continue to put a heavier burden on pixel shader processing. NVIDIA sees texture bandwidth outweighing color and z bandwidth in the not-too-distant future. This doesn't mean the quest for ever-increasing bandwidth will stop; it just means that the reasons we need more bandwidth will change.
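
For reference, here is a quick back-of-the-envelope sketch of the theoretical peak bandwidth those figures imply; actual sustained throughput will always come in below this number:

```python
# Theoretical peak bandwidth of a 256-bit memory interface running at a
# 1.2 GHz effective (DDR) data rate -- the figures quoted above.
bus_width_bits = 256      # 4x 64-bit channels
effective_rate = 1.2e9    # transfers per second

peak_bytes_per_second = (bus_width_bits / 8) * effective_rate
print(f"{peak_bytes_per_second / 1e9:.1f} GB/s")  # prints 38.4 GB/s
```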

A good example of the changing needs of graphics cards is Half-Life 2. While the game runs very well even on older graphics cards like the 9800 Pro, its design is such that additional memory bandwidth is far less important than additional shader processing power. This is why the 6600 GT cards significantly outperform the 9800 Pro. Even more interesting is that in our testing, enabling 4xAA on a 9800 Pro didn't affect HL2 performance much at all, while increasing the resolution from 1024x768 to 1280x1024 had a substantial impact on frame rates. If the HL2 model is a good example of the future of 3D engines, NVIDIA's decision to increase pixel processing power now while leaving further memory bandwidth increases for the future makes a lot of sense.

On an interesting side note, the performance tests in this article are mostly run at 1600x1200 and higher resolutions. Memory usage at 2048x1536 with 32-bit color and z-buffer runs a solid 144MB for double buffered rendering with 4x AA. This makes a 256MB card a prerequisite for this setup, and depending on the textures, render targets, and other local memory usage, even 256MB may come up a little short. PCI Express helps a little when some of that data has to spill over into system memory, but it is conceivable that some games could get choppier when swapping large textures, normal maps, and the like in and out.
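
For those curious how the 144MB figure works out, here is a minimal sketch of the arithmetic. It assumes both color buffers and the z/stencil buffer are held at full 4x multisample resolution with 4 bytes per sample, and it ignores compression and any driver-specific layout, so treat it as an approximation rather than a description of how any particular driver allocates memory:

```python
# Rough framebuffer footprint for double buffered rendering with multisample AA.
# Assumes front color, back color, and z/stencil buffers are all stored at full
# multisample resolution, 4 bytes per sample each (an approximation).
def framebuffer_mb(width, height, aa_samples=4, buffers=3, bytes_per_sample=4):
    """buffers = front color + back color + z/stencil."""
    total_bytes = width * height * aa_samples * bytes_per_sample * buffers
    return total_bytes / (1024 * 1024)

print(framebuffer_mb(2048, 1536))  # 144.0 MB, matching the figure above
```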

We don't feel that ATI's 512MB X850 really brings anything necessary to the table, but with this generation we could start to see a real use for 512MB of local memory. MRTs, larger textures, normal maps, vertex textures, huge resolutions, and a lack of hardware compression for fp16 and fp32 textures all mean that we are on the verge of seeing games push memory usage way up. Processing these huge stores of data requires GPUs powerful enough to utilize them efficiently, and the G70 begins to offer that kind of power. For the majority of today's games, 256MB of RAM is fine, but moving into the future it's easy to see how more would help.

In addition to these issues, a 512MB card would be a wonderful fit for dual-link DVI. This would make the part a nice companion to Apple's largest Cinema Display (whose native resolution is currently beyond the maximum the GeForce 7800 GTX can drive). In case anyone is curious, a double buffered, 4xAA, 32-bit color+z framebuffer at 2560x1600 is about 190MB.

In our briefings on G70, we were told that every part of the chip has been at least slightly updated from NV4x, but the general architecture and feature set remain the same. There are a couple of more significant updates as well, namely the increased performance of a single shader pipe and the addition of transparency antialiasing. Let's take a look at those changes now.

Comments

  • BenSkywalker - Wednesday, June 22, 2005 - link

    Derek-

    I wanted to offer my utmost thanks for the inclusion of 2048x1536 numbers. As one of the fairly sizeable group of owners of a 2070/2141, these numbers are enormously appreciated. As everyone can see, 1600x1200x4x16 really doesn't give you an idea of what high resolution performance will be like. As far as the benches getting a bit messed up - it happens. You moved quickly to rectify the situation and all is well now. Thanks again for taking the time to show us how these parts perform at real high end settings.
  • blckgrffn - Wednesday, June 22, 2005 - link

    You're forgiven, by me anyway :) It is also the great editorial staff that makes Anandtech my homepage on every browser on all of my boxes!

    Nat
  • yacoub - Wednesday, June 22, 2005 - link

    #72 - Totally agree. Some Rome: Total War benchmarks are much needed - but primarily to see how the game's battle performance with large numbers of troops varies between AMD and Intel, more so than between NVidia and ATi, considering the game is, to my understanding, currently highly CPU-limited.
  • DerekWilson - Wednesday, June 22, 2005 - link

    Hi everyone,

    Thank you for your comments and feedback.

    I would like to personally apologize for the issues that we had with our benchmarks today. It wasn't just one link in the chain that caused the problems we had; there were many factors that led to the results we published here today.

    For those who would like an explanation of what happened to cause certain benchmark numbers not to reflect reality, we offer you the following. Some of our SLI testing was done with multi-GPU rendering forced on for tests where there was no profile. In these cases, the default multi-GPU mode caused a performance hit rather than the increase we are used to seeing. The issue was especially bad in Guild Wars, and the SLI numbers have been removed from the offending graphs. Also, on one or two titles, our ATI display settings were improperly configured: our Windows monitor properties, ATI "Display" tab properties, and refresh rate override settings were mismatched. Rather than push the display at the pixel clock we expected, the ATI card defaulted to a "safe" mode in which the game is run at the requested resolution, but only part of the display is output to the screen. This resulted in abnormally high numbers in some cases at resolutions above 1600x1200.

    For those of you who don't care about why the numbers ran the way they did, please understand we are NOT trying to hide behind our explanation as an excuse.

    We agree completely that the more important issue is not why bad numbers popped up, but that bad numbers made it into a live article. For this I can only offer my sincerest of apologies. We consider it our utmost responsibility to produce quality work on which people may rely with confidence.

    I am proud that our readership demands a quality above and beyond the norm, and I hope that that never changes. Everything in our power will be done to assure that events like this will not happen again.

    Again, I do apologize for the erroneous benchmark results that went live this morning. And thank you for requiring that we maintain the utmost integrity.

    Thanks,
    Derek Wilson
    Senior CPU & Graphics Editor
    AnandTech.com
  • Dmitheon - Wednesday, June 22, 2005 - link

    I have to say, while I am extremely pleased with nVidia doing a real launch, the product leaves me scratching my head. They priced themselves into an extremely small market, and effectively made their 6800 series the second-tier performance cards without really dropping the price on them. I'm not going to get one, but I do wonder how this will affect the company's bottom line.
  • OrSin - Wednesday, June 22, 2005 - link

    I'm not trying to be a butthole, but can we get a benchmark that's an RTS game? I see 10+ game benchmarks and most are FPS; the few that aren't might as well be. Those RPGs seem to use a similar type of engine.
  • stmok - Wednesday, June 22, 2005 - link

    To CtK's question: Nope, SLI doesn't work with dual-display. (Last I checked, Nvidia got 2D working, but NO 3D)... Rumours say it's a driver issue, and Nvidia is working on it.

    I don't know any more than that. I think I'd rather wait until Nvidia actually demonstrates SLI with dual or more displays before I lay down any money.
  • yacoub - Wednesday, June 22, 2005 - link

    #60 - it's already to the point where it's turning people off to PC gaming, thus damaging the company's own market of buyers. It's just going to move more people to consoles, because even though PC games are often better games and much more customizable and editable, that only means so much and the trade-off versus price to play starts to become too imbalanced to ignore.
  • jojo4u - Wednesday, June 22, 2005 - link

    What about the AF setting? I understand that it was set to 8x when AA was set to 4x?
  • Rand - Wednesday, June 22, 2005 - link

    I have to say I'm rather disappointed in the quality of the article: a number of apparently nonsensical benchmark results, with little to no analysis of most of them.

    There is a complete lack of any low-level theoretical performance results, and no attempt to measure improvements in efficiency or to explore what may have caused such improvements.

    Temporal AA is only tested on one game, with image quality examined in only one scene. Given how dramatically the use of alpha textures varies between games and genres, you're providing us with an awfully limited perspective of its impact.
