Final Words

Back when Sony announced the specifications of the PlayStation 3, everyone asked if it meant the end of PC gaming. After all, Cell looked very strong and NVIDIA's RSX GPU promised tremendous power. We asked NVIDIA how long it would take until we saw a GPU faster than the RSX. Their answer: by the time the PS3 ships. So congratulations to NVIDIA for making the PS3 obsolete before it ever shipped, as G80 is truly a beast.

A single GeForce 8800 GTX is more powerful overall than a 7900 GTX SLI configuration, and even than NVIDIA's mammoth Quad SLI. Although it's no longer a surprise to see a new generation of GPU outperform the previous generation in SLI, the sheer performance G80 attains is still breathtaking. Being able to run modern games at 2560x1600 at the highest in-game detail settings completely changes the PC gaming experience. It's an expensive proposition, sure, but the experience is like no other; games simply look so much better on a 30" display at 2560x1600 that playing titles at 1600x1200 seems merely "OK" afterwards. We were less impressed by the hardware itself than by gaming at 2560x1600 with every quality setting cranked all the way up in every game we tried, and that is saying quite a lot. In reality, that's what it's all about anyway: delivering quality and performance at levels never before thought possible.

Architecturally, G80 is a gigantic leap from the previous generation of GPUs. It's the type of leap in performance we saw with the Radeon 9700 Pro, and launches of that caliber are rare. Like the 9700 Pro, G80 lets us enable features that improve image quality well beyond the previous generation, and it lets us run games smoothly at resolutions higher than we could previously hope for. And, like the 9700 Pro, the best is yet to come.

With developers now much more acclimated to programmable shader hardware, we expect a faster ramp in the availability of advanced features enabled by DirectX 10 class hardware. This is due more to the performance improvements of DX10 than anything else: game developers can create just about the same effects in SM3.0 that they can with SM4.0. The difference is that DX9 performance would be so low that the features wouldn't be worth implementing. This is unlike the DX8 to DX9 transition, where fully programmable shaders enabled an entirely new class of effects; this time, DX10 simply removes the speed limit and straps on afterburners. The only fly in the ointment for DirectX 10 is the requirement that users run Windows Vista. Unfortunately, that means developers are going to be stuck supporting both DX9 and DX10 hardware in their titles for some time, unless they simply want to write off Windows XP users as a potential market.
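To make that dual-path burden concrete, here is a minimal sketch of the kind of startup probe a developer might write; the function name and the bare-bones fallback logic are our illustration, not code from any shipping engine:

```cpp
// Hypothetical probe: try to create a Direct3D 10 device, and fall back
// to Direct3D 9 if that fails. D3D10CreateDevice only succeeds on
// Windows Vista with DX10-class hardware; everyone else gets SM3.0.
#include <d3d10.h>
#include <d3d9.h>

enum RenderPath { PATH_DX10, PATH_DX9, PATH_NONE };

RenderPath SelectRenderPath()
{
    ID3D10Device* dev10 = NULL;
    if (SUCCEEDED(D3D10CreateDevice(NULL, D3D10_DRIVER_TYPE_HARDWARE,
                                    NULL, 0, D3D10_SDK_VERSION, &dev10)))
    {
        dev10->Release();   // probe only; the real device is created later
        return PATH_DX10;   // Vista + DX10 GPU: use the SM4.0 path
    }

    IDirect3D9* d3d9 = Direct3DCreate9(D3D_SDK_VERSION);
    if (d3d9 != NULL)
    {
        d3d9->Release();
        return PATH_DX9;    // XP or a pre-DX10 GPU: use the SM3.0 path
    }
    return PATH_NONE;       // no usable Direct3D at all
}
```

Every title that ships a DX10 path will carry some variant of this fork, along with two sets of shaders to maintain -- which is exactly why the Vista requirement stings.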

Much of G80's feature set can be taken advantage of through OpenGL extensions on Windows XP today. OpenGL has unfortunately fallen out of favor in games, but there are still developers who value its clean interface and extensibility, and for them the ability to use DX10 class features is here right now.
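As an illustration, checking for those capabilities from OpenGL is just the standard extension-string query; EXT_gpu_shader4 and EXT_geometry_shader4 are among the G80 extensions NVIDIA has published, while the helper function itself is our own sketch:

```cpp
// Sketch: ask the driver whether G80-class features are exposed.
// Requires a current OpenGL context to already exist.
#include <GL/gl.h>
#include <string.h>

static bool HasExtension(const char* name)
{
    // glGetString(GL_EXTENSIONS) returns a space-separated list of
    // extension names; a plain substring search is fine for a sketch.
    const char* ext = (const char*)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, name) != NULL;
}

// Usage: SM4.0-class shading and geometry shaders on XP, no Vista needed.
//   if (HasExtension("GL_EXT_gpu_shader4") &&
//       HasExtension("GL_EXT_geometry_shader4")) { /* enable the new path */ }
```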

That's not to say that DX9 games won't see benefits from NVIDIA's new powerhouse. Everything we've tested here shows incredible scaling on G80 and makes a strong case that a unified architecture is the way forward in graphics. More complex SM3.0 code will run faster on G80 than anything we've seen on G70 or R580, and we certainly hope developers will take advantage of that and start shipping games with options for previously unheard-of levels of detail.

The bottom line is that we've got an excellent new GPU that enables incredible levels of performance and quality, and NVIDIA manages this while drawing a reasonable amount of power for the performance gained (despite requiring two PCIe power connectors per 8800 GTX). The chip is huge in terms of both transistor count and die area: our estimates based on the wafer shots NVIDIA provided indicate that the 681 million transistor G80 die is somewhere between 480 and 530 mm^2 at 90nm. This leaves NVIDIA the possibility of a spring refresh part built on TSMC's 80nm half-node process, which could enable not only better prices but higher performance and lower power as well.
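For a rough sense of what the shrink could buy, assume an ideal linear scaling from 90nm to 80nm (a best case that real processes never quite hit), applied to the middle of our die size estimate:

```latex
A_{80\,\mathrm{nm}} \approx A_{90\,\mathrm{nm}} \times \left(\frac{80}{90}\right)^{2} \approx 505\,\mathrm{mm}^2 \times 0.79 \approx 400\,\mathrm{mm}^2
```

Shaving roughly 20% off the silicon means more dies per wafer and better yields, which is precisely where the room for better prices would come from.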

While we weren't able to overclock the shader core of our G80 parts, NVIDIA has stated that shader core overclocking is coming. In our time with the new nTune, we did find that overclocking the core clock impacts performance, but we'll have more to say about this in our retail product review, to be posted in the coming days.

With G80, NVIDIA is solidly in a leadership position, and now we play the waiting game for ATI's R600 to arrive. One thing is for sure: if you're thinking about building a high-end gaming system this holiday season, you only need to consider one card.

Comments

  • Sharky974 - Thursday, November 9, 2006 - link

    The new DX10 feature stuff was captivating at first, but quickly grew tiresome and needlessly complex. The IQ comparisons were the same thing; some simplicity is needed here. Tell us in a nutshell what looks better and why. The mouse-over pictures are well nigh useless as well, and all look like crap. Whatever needs to be changed to get the IQ point across needs to be changed already; I'm guessing the 200% zoom is a problem for starters.

    Then whose bright idea was it to test only one resolution through the whole article?

    Then whose bright idea was it to dedicate just as many graphs as performance, one per game, to not only power draw but the even more useless performance per watt? Meaning 66% of your data graphs, in an article about a paradigm-changing, long-anticipated, brand new GPU, are related to the power usage of the card. Are you Electric Workers Monthly.com now?

    I am very surprised more of the comments weren't negative; this review was a total failure.

    And yeah, what's with all the non-standard resolution testing? All the big sites like H, Anand, and FS go round and round talking about the incredible depths they plumb to get to the bottom of real-world performance as it relates to the real-world, average user, and then you use stupid resolutions like 1280x960 (FS uses that particular one) that nobody on earth uses regularly! It's really, really stupid. Hell, for that matter, nobody uses 1600x1200 or any non-LCD-native res anymore either, yet those are staples of any review, so these "real world" articles aren't very real-world at all. But that's somewhat of a tangent issue, and I actually don't mind a lot of different resolutions being tested, as long as the big common ones are hit (which is not always the case).
  • DerekWilson - Friday, November 10, 2006 - link

    I'm always working on bringing down the complexity of my explanations. It's one of my weak points as a writer. It's difficult for me to take something and present it at a high level that doesn't reflect exactly what the thing is. Analogies are great -- I like them -- but I have a hard time using them because I can never think of analogies that are accurate enough.

    Any suggestions you have for helping me explain things completely, accurately, effectively, and (especially) in the most straightforward manner possible are very welcome.

    As for the IQ comparisons -- these were much more simplified than I had intended (because Anand told me we couldn't do rollovers with 40 images on one page -- it would load too slowly). This is our version of putting things in a nutshell. I could get to the point faster though --

    IQ:

    Gamma-correct AA is great for edges, but it causes problems with thin lines, and with transparency/adaptive AA it can make textures look mushy. Transparency/adaptive AA are great but carry a large performance hit -- except on the 8800, which keeps these features playable while offering higher IQ. CSAA is great at bringing higher AA levels to edges, but the loss of Z data at the sub-pixel level makes it less effective at solving the thin line problem than the equivalent MSAA modes. The rollovers illustrate all of this.

    That's as simple as I can make it -- I hope it helps.

    We did not test at only one resolution -- in every game we tested at 1600x1200, 1920x1440, and 2560x1600. In Oblivion we tested at 1280x1024 as well.

    All our resolution data was in the last graph on each page -- resolution scaling. There are two performance graphs per page. As you can see, at resolutions below 2560x1600, the 8800 GTX is almost overkill.

    1600x1200 is a standard LCD panel resolution and has been for quite some time; it's actually quite affordable now as well. 1280x1024 (while popular) is often too low to matter in a high-end performance analysis piece (and where it did matter, we tested it). 1920x1440 is a 4:3 resolution that will give 1920x1200 panel owners a very good idea of performance (the difference is usually under 5% in many games). 2560x1600 is the standard resolution for 30" LCD panels.

    I can understand being upset if you missed the performance data at other resolutions, but it seems like the rest of your complaints are that we put too much data in the article. I doubt that will change in the future, but is there anything else we could have done to make this article better? We are very willing to listen to feedback, especially on articles as big as this one.

    Thanks,
    Derek Wilson
  • flexy - Friday, November 10, 2006 - link

    >>>
    complaints are that we put too much data in the article. I doubt this will change in the future,
    >>>

    I doubt you can make it RIGHT for everyone... however, I share the opinion w/ MOST that it is an excellent review. TOO much data is seldom bad, NOT on a site where you can expect geeks and nerds to dig into every bit of information :)

    I remember times when reviews were FAR less detailed... and what can be better than going in-depth into the AA/AF modes, showing their differences in detail? I think this was right on, and I value such in-depth coverage!

    The DX10 coverage MAYBE was "too much info" for some... but it was legitimate IMHO. We're talking about a totally new h/w architecture, a totally new and revamped DX API, and the first hardware supporting it... so this was definitely a good place to cover it.

    Also... you always have the option to skip parts of a review... and the MORE detailed it is, the more helpful it is as a resource to come back to and read up on later. You don't need to comprehend every bit of information at once, but it's good to know it's there.

    my $0.02
  • jiulemoigt - Thursday, November 9, 2006 - link

    The first really big issue is that a poly can have more than one color on it, due to textures, subsurface scattering, displacements, bump maps, normal maps, occlusion passes, specular highlights, transparency, and a few others I can't think of off the top of my head. You could probably find out just by asking in any CG forum like CGTalk, or any dev who has worked with a professional 3D package. That said, it may have confused people to try to explain how it really works.
    The other issue is with gamma-correct AA. Maybe my monitor is showing a way different image, but I'm not really sure how you can even compare
    http://images.anandtech.com/reviews/video/NVIDIA/G...
    http://images.anandtech.com/reviews/video/NVIDIA/G...
    as the light is highlighting the buildings from two different directions in the images: in the NVIDIA image it is coming from the left and behind the buildings, and in the ATI image from the right, about midway down the image, in front of the little building.
    Though a question that should be asked is what time of day it is supposed to be: the NVIDIA shot looks like dusk, and the ATI one looks blown out even for high noon. Yet the pair above seems to be the same time of day, with the NVIDIA shot blown out and the ATI one shadowing correctly. Really odd, which suggests that some other filter, like HDR or something else, is causing the issue on both cards.
  • DerekWilson - Thursday, November 9, 2006 - link

    Yes, a poly can have more than one color on it, and I agree our explanation could have been better... but it is a difficult topic to talk about.

    The whole basis of multisample AA relies on the assumption that the color of a poly *within one pixel* will not vary significantly. Of course, this is not always true. This is, in fact, the reason supersample AA does make a difference -- it takes into account the actual color of the poly at the position of each sub-pixel. This is also why it's so much more expensive.

    I didn't mean to imply that an entire poly must have only one color. But it's hard to talk about MSAA without pointing out the fact that the algorithm assumes one color per pixel per poly (calculated at the pixel center in most cases).
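    If it helps, here is a schematic sketch of the distinction -- the types and helpers are made up for illustration and look nothing like real hardware, but the shape of the two algorithms is the point:

    ```cpp
    // Schematic only: Poly, Shade(), and Covers() stand in for the real
    // rasterizer state and are left undefined here.
    struct Color  { float r, g, b; };
    struct Sample { float x, y; Color color; };
    struct Pixel  { float cx, cy; Sample s[4]; int n; };

    Color Shade(const struct Poly& p, float x, float y);   // full pixel shader
    bool  Covers(const struct Poly& p, float x, float y);  // coverage + Z test

    // MSAA: one shading computation per poly per pixel (at the pixel
    // center), reused at every covered sample. Coverage and Z are still
    // per-sample, which is what smooths polygon edges.
    void PixelMSAA(const struct Poly& p, Pixel& px)
    {
        Color c = Shade(p, px.cx, px.cy);        // shade once
        for (int i = 0; i < px.n; i++)
            if (Covers(p, px.s[i].x, px.s[i].y))
                px.s[i].color = c;               // same color at each sample
    }

    // SSAA: shade every sample at its own position, so color variation
    // *within* the pixel is captured too -- at several times the cost.
    void PixelSSAA(const struct Poly& p, Pixel& px)
    {
        for (int i = 0; i < px.n; i++)
            if (Covers(p, px.s[i].x, px.s[i].y))
                px.s[i].color = Shade(p, px.s[i].x, px.s[i].y);
    }
    ```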

    We did enable HDR, but we tried our hardest to take the screenshots at exactly the same amount of time after loading the scene (Valve's HDR uses dynamic exposure, which changes saturation over time and with the light level coming into the camera).

    While this would impact general image comparison, it doesn't impact the effect of gamma-correct AA on thin lines (which is what we were trying to show).

    Thanks for the feedback -- if there's anything you can add to help us be more specific in our description, we would certainly appreciate it. We would like to avoid simply leaving details out -- we'd like to learn how to better impart knowledge.
  • Nimbo - Thursday, November 9, 2006 - link

    This must be the first GPU article that doesn't devolve into a flame war between ATI and NVIDIA fanboys...
  • flexy - Thursday, November 9, 2006 - link

    I actually don't care. I look at performance and comparisons, and then choose which card to get :) Although I've been w/ ATI for years already.

    If one card, however, has some substantial advantage over another, I'll gladly point that out and also gladly debate with others why I'd prefer card X over Y.

    That's the difference between a fanboy and an enthusiast, I think -- being able to back up statements w/ facts instead of just defending a "brand".

    the other "problem" is really that same gen cards USUALLY are pretty much on par prformance wise...so debating/defeninf brand X over Y does make as much sense as defending ferrari over lamborghini :)

    But then... if we didn't do that, and discuss even the "littlest" details, and have lengthy conversations on forums about WHICH AA method is better and why... and why 5 FPS here or there matter... and/or why this AF method is better than that one... it would be pretty boring.

    I mean, we're hardware enthusiasts, and gfx cards are (IMHO) the most interesting component in a PC :)
  • DigitalFreak - Thursday, November 9, 2006 - link

    I thought we were done with the days of >$499 single GPU cards after the 7900GTX launch. Guess not.
  • VooDooAddict - Thursday, November 9, 2006 - link

    Great article.

    Now I just need to figure out if an 8800GTX will fit in a mATX UltraFly case.
  • Araemo - Thursday, November 9, 2006 - link

    Everyone is repeating Microsoft's claim that DX10 will be Vista-only.

    The Inq (I know, I know....) reported here -- http://www.theinquirer.net/default.aspx?article=35... -- that there will be a DirectX '9.0L' for XP that supports the new rendering features of DirectX 10, but without the new virtualization/driver model improvements.
