The Card

The GeForce 8800 GT, built around the G92 GPU, is quite a sleek card. The heatsink shroud covers the entire length of the card, leaving no capacitors exposed. Thanks to the 65nm G92, the card's thermal envelope is low enough to require only a single-slot cooling solution. Here's a look at the card itself:



The card makes use of two dual-link DVI outputs and a third output for analog HD and other applications. We see a single SLI connector on top of the card, and a single 6-pin PCIe power connector on the back of the card. NVIDIA reports the maximum dissipated power as 105W, which falls within the 150W power envelope provided by the combination of one PCIe power connector and the PCIe x16 slot itself.
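That 150W figure is easy to sanity check. A quick sketch of the arithmetic, assuming the standard PCIe figures of up to 75W from the x16 slot and up to 75W from a 6-pin auxiliary connector (the per-source numbers are not stated in the article, only the 150W total):

```python
# Rough power-budget check for the 8800 GT.
# Assumed per the PCIe spec: x16 slot and 6-pin connector each supply up to 75W.
SLOT_W = 75        # PCIe x16 slot
SIX_PIN_W = 75     # one 6-pin PCIe power connector

budget = SLOT_W + SIX_PIN_W
tdp = 105          # NVIDIA's reported maximum dissipated power

print(f"Available: {budget}W, required: {tdp}W, headroom: {budget - tdp}W")
# Available: 150W, required: 105W, headroom: 45W
```

With 45W of headroom, the single 6-pin connector is comfortably sufficient.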

The fact that this card is built on a 65nm process has already prompted at least one vendor to attempt an 8800 GT with a passive cooler. While the 8800 GT does use less power than other cards in its class, we will have to wait and see whether passive cooling remains stable through the most rigorous tests we can put it through.

Earlier this summer we reviewed NVIDIA's VP2 hardware in the form of the 8600 GTS. The 8800 GTX and GTS both lacked the faster video decode hardware of the lower end 8 Series parts, but the 8800 GT changes all that. We now have a very fast GPU that includes full H.264 decode offload. Most of the VC-1 pipeline is also offloaded to the GPU, but the entropy decode stage of VC-1 is not accelerated by NVIDIA's hardware. This matters less for VC-1, as its decode process is much less strenuous. To recap the pipeline, here is a comparison of different video decode hardware:



NVIDIA's VP2 hardware matches the bottom line for H.264, and the line above for VC-1 and MPEG-2. This includes the 8800 GT.

We aren't including any new tests here, as we can expect performance on the same level as the 8600 GTS. This means a score of 100 under HD HQV and very low CPU utilization, even on lower-end dual-core processors.

Let's take a look at how this card stacks up against the rest of the lineup:

Form Factor                  8800 GTX     8800 GTS       8800 GT        8600 GTS
Stream Processors            128          96             112            32
Texture Address / Filtering  32 / 64      24 / 48        56 / 56        16 / 16
ROPs                         24           20             16             8
Core Clock                   575MHz       500MHz         600MHz         675MHz
Shader Clock                 1.35GHz      1.2GHz         1.5GHz         1.45GHz
Memory Clock                 1.8GHz       1.6GHz         1.8GHz         2.0GHz
Memory Bus Width             384-bit      320-bit        256-bit        128-bit
Frame Buffer                 768MB        640MB / 320MB  512MB / 256MB  256MB
Transistor Count             681M         681M           754M           289M
Manufacturing Process        TSMC 90nm    TSMC 90nm      TSMC 65nm      TSMC 80nm
Price Point                  $500 - $600  $270 - $450    $199 - $249    $140 - $199


On paper, the 8800 GT renders the 8800 GTS all but pointless. The 8800 GT has more shader processing power, can address and filter more textures per clock, and falls short only in the number of pixels it can write out to memory per clock and in overall memory bandwidth. Even then, the 8800 GTS' memory bandwidth advantage isn't that great (64GB/s vs. 57.6GB/s), amounting to only 11% thanks to the 8800 GT's slightly higher memory clock. If the 8800 GT ends up performing as well as, if not better than, the 8800 GTS, NVIDIA will have truly thrown down an amazing hand.
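Those bandwidth figures fall straight out of the bus width and effective memory clock from the table above. A minimal sketch of the calculation (the helper function name is ours, not anything from the article):

```python
def bandwidth_gbps(bus_width_bits, effective_clock_ghz):
    """Peak memory bandwidth in GB/s: bytes per transfer x transfers per second."""
    return bus_width_bits / 8 * effective_clock_ghz

gts = bandwidth_gbps(320, 1.6)   # 8800 GTS: 320-bit bus, 1.6GHz effective
gt  = bandwidth_gbps(256, 1.8)   # 8800 GT:  256-bit bus, 1.8GHz effective

print(f"8800 GTS: {gts:.1f} GB/s, 8800 GT: {gt:.1f} GB/s")
# 8800 GTS: 64.0 GB/s, 8800 GT: 57.6 GB/s
print(f"GTS advantage: {(gts / gt - 1) * 100:.0f}%")
# GTS advantage: 11%
```

The narrower 256-bit bus costs the 8800 GT more than its faster memory clock wins back, which is exactly the 11% gap quoted above.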

You see, the GeForce 8800 GTS 640MB was an incredible performer upon its release, but it was still priced too high for the mainstream. NVIDIA turned up the heat with a 320MB version, which you'll remember performed virtually identically to the 640MB card while bringing the price down to $300. With the 320MB GTS, NVIDIA gave us the performance of its $400 card for $300, and now with the 8800 GT, NVIDIA looks like it's going to give us that same performance for $200. And all this without a significant threat from AMD.

Before we get too far ahead of ourselves, we'll need to see how the 8800 GT and 8800 GTS 320MB really stack up. On paper the decision is clear, but we need some numbers to be sure, and we can't get to the numbers until we cover a couple more bases. The only other physical point of interest about the 8800 GT is that it takes advantage of the PCIe 2.0 specification. Let's take a look at what that really means.

Comments

  • AggressorPrime - Monday, October 29, 2007 - link

    I made a typo. Let us hope they are not on the same level.
  • ninjit - Monday, October 29, 2007 - link

    This page has me very confused:
    http://www.anandtech.com/video/showdoc.aspx?i=3140...

    The text of the article goes on as if the GT doesn't really compare to the GTX, except on price/performance:

    quote:

    We would be out of our minds to expect the 8800 GT to even remotely compete with the GTX, but the real question is - how much more performance do you get from the extra money you spent on the GTX over the GT?


    quote:

    But back to the real story, in spite of the fact that the 8800 GT doesn't touch the GTX, two of them will certainly beat it for either equal or less money.



    Yet all the graphs show the GT performing pretty much on par with the GTX, with at most a 5-10fps difference at the highest resolution.

    I didn't understand that last sentence I quoted above at all.
  • archcommus - Monday, October 29, 2007 - link

    This is obviously an amazing card and I hope it sets a new trend for getting good gaming performance in the latest titles for around $200 like it used to be, unlike the recent trend of having to spend $350+ for high end (not even ultra high end). However, I don't get why a GT part is higher performing than a GTS, isn't that going against their normal naming scheme a bit? I thought it was typically: Ultra -> GTX -> GTS -> GT -> GS, or something like that.
  • mac2j - Monday, October 29, 2007 - link

    I've been hearing rumors about an Nvidia 9800 card being released in the coming months .... is that the same card with an outdated/incorrect naming convention or a new architecture beyond G92?

    I guess if Nvidia had a next-gen architecture coming it would explain why they don't mind wiping some of their old products off the board with the 8800 GT, which seems as though it will be a dominant part for the remaining lifetime of this generation of parts.
  • MFK - Monday, October 29, 2007 - link

    After lurking on Anandtech for two layout/design revisions, I have finally decided to post a comment. :D
    First of all hi all!

    Second of all, is it okay that nVidia decided not to introduce a proper next-gen part in favour of this mid-range offering? Okay, so it's good and whatnot, but what I'm wondering, something the article doesn't talk about, is what the future value of this card is. Can I expect this to play some upcoming games (Alan Wake?) at 1600 x 1200? I know it's hard to predict, but industry analysts like you guys should have some idea. Also, how long can I expect this card to continue playing games at acceptable framerates? Any idea, anyone?
    Thanks.
  • DerekWilson - Monday, October 29, 2007 - link

    that's a tough call ....

    but really, it's up to the developers.

    UT3 looks great in DX9, and Bioshock looks great in DX10. Crysis looks amazing, but it's a demo, not final code, and it does run very slowly.

    The bottom line is that developers need to balance the amazing effects they show off with playability -- it's up to them. They know what hardware you've got, and they choose whether to push the envelope or not.

    I know that's not an answer, sorry :-( ... it is just nearly impossible to say what will happen.
  • crimson117 - Monday, October 29, 2007 - link

    How much RAM was on the 8800 GT used in testing? Was it 256 or 512?
  • NoBull6 - Monday, October 29, 2007 - link

    From context, I'm thinking 512. Since 512MB cards are the only ones available in the channel, and Derek was hypothesizing about the pricing of a 256MB version, I think you can be confident this was a 512MB test card.
  • DerekWilson - Monday, October 29, 2007 - link

    correct.

    256MB cards do not exist outside NVIDIA at this point.
  • ninjit - Monday, October 29, 2007 - link

    I was just wondering about that too.

    I thought I missed it in the article, but I didn't see it in another run through.

    I see I'm not the only one who was curious
