Original Link: http://www.anandtech.com/show/2365

Here we are, a year after the launch of G80, and we are seeing what amounts to the first real "refresh" part. Normally, a new or revamped version of hardware arrives about six months after its introduction, but this time NVIDIA rolled out its latest architecture over the full year instead: first the high end hardware hit, and then the low end parts emerged, with previous generation hardware left to serve the budget segment in the meantime. We haven't seen a true midrange part come out over the past year, which has disappointed many.

Rather than actually create a midrange part based on G80, NVIDIA opted to tweak the core, shrink it to a 65nm process, integrate the display engine, and come out with hardware (G92) that performs somewhere between the high end 8800 GTS and GTX. While this, in itself, isn't remarkable, the fact that NVIDIA is pricing this card between $200 and $250 is. Essentially, we've been given a revised high end part at midrange prices. The resulting card, the 8800 GT, essentially cannibalizes a large chunk of NVIDIA's own DX10 class hardware lineup. Needless to say, it also further puts AMD's 2900 XT to shame.

We will certainly provide data to back up all these ridiculous claims (I actually think NVIDIA may have invented the question mark as well), but until then, let's check out what we are working with. We've got a lot to cover, so let's get right to it.

G92: Funky Naming for a G80 Derivative

If we expected G9x to represent a new architecture supporting a GeForce 9 series, we would be wrong. In spite of the fact that part of the reason we were given for NVIDIA's move away from NVxx code naming was to bring code name and product name closer to parity (G7x is GeForce 7, G8x is GeForce 8), it seems NVIDIA has broken this rule rather early on. Code names are automatically generated, but how we ended up with only three different G8x parts before hitting G9x is certainly a mystery - one NVIDIA didn't feel like enlightening us on, as it no doubt has to do with unannounced products.

While not a new architecture, the GPU behind the 8800 GT has certainly been massaged quite a bit from the G80. The G92 is fabbed on a 65nm process, and even though it has fewer SPs, less texturing power, and not as many ROPs as the G80, it's made up of more transistors (754M vs. 681M). This is partly due to the fact that G92 integrates the updated video processing engine (VP2), and the display engine that previously resided off chip. Now, all the display logic including TMDS hardware is integrated onto the GPU itself.

In addition to the new features, there have been some enhancements to the architecture that likely added a few million transistors here and there as well. While we were unable to get any really good details, we were told that lossless compression ratios were increased in order to enable better performance at higher resolutions over the lower bandwidth memory bus attached to the G92 on 8800 GT. We also know that the proportion of texture address units to texture filtering units has increased to a 1:1 ratio (similar to the 8600 GTS, but in a context where we can actually expect decent performance). This should also improve memory bandwidth usage and texturing power in general.

Because NVIDIA was touting the addition of hardware double precision IEEE 754 floating point on their workstation hardware coming sometime before the end of the year, we suspected that G92 might include this functionality. It seems, however, that the hardware behind that advancement has been pushed back for some reason. G92 does not support hardware double precision floating point. This is only really useful for workstation and GPU computing applications at the moment, but because NVIDIA designs one GPU for both consumer and workstation applications, it will be interesting to see if they do anything at all with double precision on the desktop.

With every generation, we can expect buffers and on chip memory to be tweaked based on experience with the previous iteration of the hardware. This could also have resulted in additional transistors. But regardless of the reason, this GPU packs quite a number of features into a very small area. The integration of these features into one ASIC is possible economically because of the 65nm process: even though there are more transistors, the physical die takes up much less space than the G80.

The Card

The GeForce 8800 GT, whose heart is a G92 GPU, is quite a sleek card. The heatsink shroud covers the entire length of the card so that no capacitors are exposed. The card's thermal envelope is low enough, thanks to the 65nm G92, to require only a single slot cooling solution. Here's a look at the card itself:

The card makes use of two dual-link DVI outputs and a third output for analog HD and other applications. We see a single SLI connector on top of the card, and a single 6-pin PCIe power connector on the back of the card. NVIDIA reports the maximum dissipated power as 105W, which falls within the 150W power envelope provided by the combination of one PCIe power connector and the PCIe x16 slot itself.
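The power budget arithmetic here is simple enough to sketch out; the 75W slot and 75W six-pin figures come from the PCI Express specification:

```python
# Sanity check of the 8800 GT's power budget, using the PCIe spec
# limits: 75W from the x16 slot plus 75W from one 6-pin connector.
SLOT_POWER_W = 75       # PCIe x16 slot, per the PCIe spec
SIX_PIN_POWER_W = 75    # one 6-pin PCIe auxiliary connector
TDP_W = 105             # NVIDIA's quoted maximum for the 8800 GT

budget = SLOT_POWER_W + SIX_PIN_POWER_W
headroom = budget - TDP_W

print(f"available: {budget}W, card max: {TDP_W}W, headroom: {headroom}W")
# available: 150W, card max: 105W, headroom: 45W
```

That 45W of headroom is why a single 6-pin connector suffices and no 8-pin is needed.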

The fact that this thing is 65nm has given rise to at least one vendor attempting to build an 8800 GT with a passive cooler. While the 8800 GT does use less power than other cards in its class, we will have to wait and see if passive cooling will remain stable even through the most rigorous tests we can put it through.

Earlier this summer we reviewed NVIDIA's VP2 hardware in the form of the 8600 GTS. The 8800 GTX and GTS both lacked the faster video decode hardware of the lower end 8 Series hardware, but the 8800 GT changes all that. We now have a very fast GPU that includes full H.264 offload capability. Most of the VC-1 pipeline is also offloaded to the GPU, but the entropy decoding used in VC-1 is not hardware accelerated on NVIDIA hardware. This is less of an issue for VC-1, as its decode process is much less strenuous. To recap the pipeline, here is a comparison of different video decode hardware:

NVIDIA's VP2 hardware matches the bottom line for H.264, and the line above for VC-1 and MPEG-2. This includes the 8800 GT.

We aren't including any new tests here, as we can expect performance on the same level as the 8600 GTS. This means a score of 100 under HD HQV, and very low CPU utilization even on lower end dual core processors.

Let's take a look at how this card stacks up against the rest of the lineup:

Form Factor                   8800 GTX      8800 GTS        8800 GT         8600 GTS
Stream Processors             128           96              112             32
Texture Address / Filtering   32 / 64       24 / 48         56 / 56         16 / 16
ROPs                          24            20              16              8
Core Clock                    575MHz        500MHz          600MHz          675MHz
Shader Clock                  1.35GHz       1.2GHz          1.5GHz          1.45GHz
Memory Clock (effective)      1.8GHz        1.6GHz          1.8GHz          2.0GHz
Memory Bus Width              384-bit       320-bit         256-bit         128-bit
Frame Buffer                  768MB         640MB / 320MB   512MB / 256MB   256MB
Transistor Count              681M          681M            754M            289M
Manufacturing Process         TSMC 90nm     TSMC 90nm       TSMC 65nm       TSMC 80nm
Price Point                   $500 - $600   $270 - $450     $199 - $249     $140 - $199

On paper, the 8800 GT renders the 8800 GTS all but pointless. The 8800 GT has more shader processing power, can address and filter more textures per clock, and only falls short in the number of pixels it can write out to memory per clock and in overall memory bandwidth. Even then, the memory bandwidth advantage of the 8800 GTS isn't that great (64GB/s vs. 57.6GB/s), amounting to only 11% thanks to the 8800 GT's slightly higher memory clock. If the 8800 GT ends up performing as well as, if not better than, the 8800 GTS then NVIDIA will have truly thrown down an amazing hand.
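For reference, those bandwidth figures fall straight out of bus width and effective memory clock; a quick sketch:

```python
# Peak memory bandwidth: bytes/s = (bus width in bits / 8) * data rate.
def bandwidth_gbps(bus_bits: int, effective_clock_ghz: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_bits / 8 * effective_clock_ghz

gts = bandwidth_gbps(320, 1.6)   # 8800 GTS: 320-bit @ 1.6GHz effective
gt = bandwidth_gbps(256, 1.8)    # 8800 GT:  256-bit @ 1.8GHz effective

print(f"8800 GTS: {gts:.1f} GB/s, 8800 GT: {gt:.1f} GB/s")
# 8800 GTS: 64.0 GB/s, 8800 GT: 57.6 GB/s
print(f"GTS advantage: {(gts / gt - 1) * 100:.0f}%")
# GTS advantage: 11%
```

The GT's 200MHz faster memory claws back much of what the narrower 256-bit bus gives up.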

You see, the GeForce 8800 GTS 640MB was an incredible performer upon its release, but it was still priced too high for the mainstream. NVIDIA turned up the heat with a 320MB version, which you'll remember performed virtually identically to the 640MB while bringing the price down to $300. With the 320MB GTS, NVIDIA gave us the performance of its $400 card for $300, and now with the 8800 GT, NVIDIA looks like it's going to give us that same performance for $200. And all this without a significant threat from AMD.

Before we get too far ahead of ourselves, we'll need to see how the 8800 GT and 8800 GTS 320MB really do stack up. On paper the decision is clear, but we need some numbers to be sure. And we can't get to the numbers until we cover a couple more bases. The only other physical point of interest about the 8800 GT is the fact that it takes advantage of the PCIe 2.0 specification. Let's take a look at what that really means right now.

The First PCIe 2.0 Graphics Card

NVIDIA's 8800 GT is the "world's first consumer GPU to support PCI Express 2.0." Although AMD's Radeon HD 2400/2600 have PCIe 2.0 bandwidth, they don't implement the full spec, leaving the 8800 GT technically the first full PCIe 2.0 GPU. Currently, the only motherboard chipset out there that could take advantage of this is Intel's X38. We have yet to play with benchmarks on PCIe 2.0, but we don't expect any significant impact on current games and consumer applications. Currently we aren't bandwidth limited by PCIe 1.1 with its 4GB/sec in each direction, so it's unlikely that the speed boost would really help. This sentiment is confirmed by game developers and NVIDIA, but if any of our internal tests show anything different we'll certainly put a follow-up together.

PCIe 2.0 itself offers double the speed of the original spec. This means pairing a x16 PCIe 2.0 GPU with a x16 electrical PCIe 2.0 slot on a motherboard will offer 8GB/sec of bandwidth upstream and downstream (16GB/sec total bandwidth). This actually brings us to an inflection point in the industry: the CPU now has a faster connection to the GPU than to main system memory (compared to 800MHz DDR2). When we move to 1066MHz and 1333MHz DDR3, system memory will be faster, but for now most people will still be using 800MHz memory even with PCIe 2.0. PCIe 3.0 promises to double the bandwidth again from version 2.0, which would likely put a graphics card ahead of memory in terms of potential CPU I/O speed again. This will still be limited by the read and write speed of the graphics card itself, which has traditionally left a lot to be desired. Hopefully GPU makers will catch up with this and offer faster GPU memory read speeds as well.
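The comparison is easy to verify; the sketch below assumes the 8b/10b link encoding PCIe 1.x/2.0 use and a dual-channel DDR2-800 system, as in our example:

```python
# PCIe per-direction bandwidth: lanes * line rate, less 8b/10b overhead.
def pcie_gbps(lanes: int, gt_per_s: float) -> float:
    """Usable per-direction PCIe 1.x/2.0 bandwidth in GB/s."""
    return lanes * gt_per_s * 8 / 10 / 8  # GT/s -> GB/s after 8b/10b

pcie1_x16 = pcie_gbps(16, 2.5)   # PCIe 1.1 x16: 4.0 GB/s each direction
pcie2_x16 = pcie_gbps(16, 5.0)   # PCIe 2.0 x16: 8.0 GB/s each direction

# Dual-channel DDR2-800: two 64-bit channels at 800 MT/s.
ddr2_800 = 2 * 64 / 8 * 0.8      # 12.8 GB/s

print(pcie1_x16, pcie2_x16, ddr2_800)
```

Counting both directions, a PCIe 2.0 x16 link moves 16GB/sec against 12.8GB/sec for dual-channel DDR2-800, which is the inflection point described above.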

For now, the only key point is that the card supports PCIe 2.0, and moving forward in bandwidth before we need it is a terrific step in enabling developers by giving them the potential to make use of a feature before there is an immediate need. This is certainly a good thing, as massively parallel processing, multiGPU, physics on the graphics card and other GPU computing techniques and technologies threaten to become mainstream. While we may not see applications that push PCIe 2.0 in the near term, moving over to the new spec is an important step, and we're glad to see it happening at this pace. But there are no real tangible benefits to the consumer right now either.

The transition to PCIe 2.0 won't be anything like the move from AGP to PCIe. The cards and motherboards are backwards and forwards compatible. PCIe 1.0 and 1.1 compliant cards can be plugged into a PCIe 2.0 motherboard, and PCIe 2.0 cards can be plugged into older motherboards. This leaves us with zero impact on the consumer due to PCIe 2.0, in more ways than one.

$199 or $249?

For this launch, we have been given a $50 price range for the 8800 GT. NVIDIA told us that there will be no $200 8800 GT parts available at launch, but they should come along after prices settle down a bit. Initially, we thought that the 256MB parts would be $200 and the 512MB parts $250. It turns out that we were mistaken.

In fact, we can expect the stock clocked 512MB 8800 GT to hit $200 at the low end. The 256MB part, which won't show up until the end of November, will hit prices below $200. Upon hearing Ujesh Desai, NVIDIA's General Manager of Desktop GPUs, explain this incredible projection, my internal monologue was somehow rerouted to my mouth and I happened to exclaim (with all too much enthusiasm) "you're crazy!" As an aside, we at AnandTech try very hard to maintain a high level of professionalism in all our dealings with industry players. Such a response is quite out of character for any of our editors. Regardless, I continued on to say that it seems NVIDIA has started taking notes from the local commercials we all see about deep discount auto dealers slashing prices on everything. Apparently I was the second person that day to react that way to the information.

Honestly, depending on how quickly the 512MB 8800 GT falls to $200, this launch could truly be revolutionary. As Jen-Hsun asked the crowd of journalists at NVIDIA's recent Editor's Day: "Do you remember the Ti-4200?" And we really could see a product to rival the impact of that one here today. But even at $250, the 8800 GT is an incredible buy, and if it takes until after the holiday season for prices to come down to $200, we won't be surprised. When the 256MB part hits the scene, we will certainly be interested in seeing where price and performance shake out, and whatever AMD has up its sleeves could also prove interesting and change the landscape as well. NVIDIA has been fairly accurate in giving us pricing we can expect to see on the street, and we really hope that trend continues.

Of course, since this is an NVIDIA GPU, we can also expect overclocked versions from almost every company building a card based on G92. These will definitely come with a price premium, but we are really hoping to see the price range eventually settle into a baseline of $200 with overclocked cards topping out at $250. But we will have to wait and see what happens, and even if the price never falls that much the 512MB 8800 GT is a very good value. There's no way to lose with this one.

The Test

For this test, we are using a high end CPU configured with 4GB of DDR2 in an NVIDIA 680i motherboard. While we are unable to make full use of the 4GB of RAM due to the fact that we're running 32-bit Vista, we will be switching to 64-bit within the next few months for graphics. Before we do so we'll have a final article on how performance stacks up between the 32-bit and 64-bit versions of Vista, as well as a final look at Windows XP performance.

Our test platform for this article is as follows:

Test Setup
CPU Intel Core 2 Extreme X6800
Motherboard NVIDIA 680i SLI
Video Cards AMD Radeon HD 2900 XT
AMD Radeon X1950 XTX
NVIDIA GeForce 8800 GTX
NVIDIA GeForce 8800 GTS 320MB
NVIDIA GeForce 8800 GT
NVIDIA GeForce 8600 GTS
NVIDIA GeForce 7950 GT
Video Drivers AMD: Catalyst 7.10
NVIDIA: 169.01
Hard Drive Seagate 7200.9 300GB 8MB 7200RPM
RAM 4x1GB Corsair XMS2 PC2-6400 4-4-4-12
Operating System Windows Vista Ultimate 32-bit

We've made use of a good handful of the latest games, but our format for this article is more focused on breaking out specific comparisons than the usual GPU review. We will be individually pitting the 8800 GT against the 8800 GTX, the 8800 GTS, the 8600 GTS, and the 2900 XT. We'll also take a second to look at how the 8800 GT compares against previous generation hardware. First up is our comparison with the 8800 GTS.

Line Substitution: 8800 GT vs. 8800 GTS

As we've mentioned, on paper the 8800 GT looks much better in every area except for memory bandwidth. Even though memory size is an advantage for the 640MB card, we know from experience that the added memory size really doesn't net us much in the way of performance except in the most extreme circumstances. So we certainly expect the 8800 GT to outperform both the more expensive 8800 GTS 320MB and (by extension) the 8800 GTS 640MB. Essentially, this should give us the performance of a $400 card for $200 - $250. Quite a good deal no matter how you slice it.

And right from the start, our expectations are upheld and then some. The 8800 GT does top the GTS. This seals the deal: the 8800 GTS is no longer a viable product. While NVIDIA has stated that the 320MB part will be dropped very quickly, they expect the 640MB card to stick around for a few months. We don't see this happening unless retailers start selling the 640MB part for $250 as well. While such a drastic price reduction is possible, it isn't very likely. Once people hear about the performance advantage of the 8800 GT over the 8800 GTS, there may be a lot of $400+ 8800 GTS inventory sitting on shelves.

NVIDIA Demolishes... NVIDIA? 8800 GT vs. 8600 GTS

This is almost a silly comparison, but it's one we must do in order to illustrate a very valid point: the GeForce 8800 GT all but destroys any reason to purchase the 8600 GTS. The price difference between the 8600 GTS and the 8800 GT can be as little as $50, and as you're about to see, the performance difference more than justifies the price.

The specs alone should give you an indication of the thrashing that the 8600 GTS is about to receive: 112 SPs vs. 32 on the 8600 GTS, 3.5x the texture address and filtering power, and virtually twice the memory bandwidth; and all this for only $50 more?
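Those paper-spec ratios can be sketched from the numbers in our comparison table (this assumes the 8600 GTS's 2.0GHz effective memory clock on its 128-bit bus):

```python
# Paper-spec ratios: 8800 GT vs. 8600 GTS, from the comparison table.
specs = {
    "8800 GT":  {"sps": 112, "tex": 56, "bw": 256 / 8 * 1.8},  # 57.6 GB/s
    "8600 GTS": {"sps": 32,  "tex": 16, "bw": 128 / 8 * 2.0},  # 32.0 GB/s
}

gt, gts = specs["8800 GT"], specs["8600 GTS"]
print(f"stream processors: {gt['sps'] / gts['sps']:.1f}x")  # 3.5x
print(f"texture units:     {gt['tex'] / gts['tex']:.1f}x")  # 3.5x
print(f"memory bandwidth:  {gt['bw'] / gts['bw']:.1f}x")    # 1.8x
```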

One of two things needs to happen in order for the 8600 GTS to make sense in NVIDIA's lineup; it either needs to get a lot cheaper, or the 8800 GT needs to be closer to $250 in price. You know what our preference is, but at $200, the 8800 GT is the best card in NVIDIA's lineup.

Getting Cocky: 8800 GT vs. the GTX

We've already established that NVIDIA's new 8800 GT is better than the 8800 GTS, and we've just proved that the 8800 GT is a much better value than the 8600 GTS, but how does it fare against the current king of the hill - the GeForce 8800 GTX?

We would be out of our minds to expect the 8800 GT to even remotely compete with the GTX, but the real question is - how much more performance do you get from the extra money you spent on the GTX over the GT?

Because the 8800 GT does top the 8800 GTS, the gap between the 8800 GT and the GTX is even smaller than the gap between the existing $400 and $500 parts. With a smaller difference in performance and a $250 to $300 premium for the 8800 GTX over the part one step down, we really don't see much motivation to purchase the 8800 GTX.

Of course, there are exceptions to this, and people who own 30" displays and have deep pockets will still want the best of the best in their boxes, which means the 8800 GTX/Ultra SLI setup that will be necessary to get the most out of Crysis when the game finally makes its way on to shelves this fall. NVIDIA still owns the high end market, and while it's not for everyone, it is there for those who need it.

But back to the real story, in spite of the fact that the 8800 GT doesn't touch the GTX, two of them will certainly beat it for either equal or less money.

What must AMD do? 8800 GT vs. 2900 XT

Alright, here's where things get really interesting. AMD has yet to come out with its 8800 GT competitor, but we've heard some rumors here and there. First, our understanding is that the RV670 based AMD part will not be any faster than the 2900 XT (and will likely be at least a little bit slower). We can't confirm this, as we haven't heard from AMD on the subject or received hardware to play with yet (in fact, if we had, we wouldn't even be able to bring up our speculation). But if we are right, then it makes sense to compare the 8800 GT to the 2900 XT and see what happens.

Given the performance of the 8800 GT relative to the 8800 GTS, we can expect the 8800 GT to perform on par with, if not better than, the Radeon HD 2900 XT. Our numbers confirm this for the most part. It's also worth noting that as resolution increases, the 2900 XT really closes the performance gap. This information is quite important. Either AMD needs to pull a rabbit out of the hat and surprise us with performance higher than we expect, or they need to compete with the 8800 GT based on price. We are hearing that the upcoming part from AMD should be competitive with 8800 GT pricing, but we'd need to see availability of the RV670 based parts at prices lower than the 8800 GT to make them start looking worthwhile. This could be difficult for AMD if NVIDIA hits their target of $200 (or lower for the 256MB version).

Again, none of the info we have on the upcoming AMD part is confirmed by AMD. We are simply speculating based on our best guess at their direction and rumors we have heard. Regardless of what AMD does or doesn't have in the works, it will be difficult for them to afford just another moderate showing. They must either clearly outperform or undercut the 8800 GT on price to stay in the game this generation.

Out with the Old, In with the New: 8800 GT vs. 7950 GT and X1950 XTX

Many gamers are likely still rocking either GeForce 7 or Radeon X1k based hardware. We understand that gamers don't have a continuous $250 fund in order to upgrade their graphics card whenever something new comes out. A good many of us have been waiting (and not so patiently) for a DX10 class graphics card in the $200 - $250 range. The 8800 GTS 320MB has been a great option for those who could afford it, but the 8600 GTS and 2600 XT really haven't delivered anything close to the kind of performance we wanted for the price.

While we don't expect many people to "upgrade" to an 8800 GT from an 8800 GTS 320MB, we do expect those who spent $250+ on a previous generation DX9 class card to be interested in moving up to a current generation product. In order to paint a good picture of what gamers with older hardware can expect, we decided to pick only a couple of reference points. While we could have tested everything out there, we felt that looking at the absolute fastest DX9 class card available (the Radeon X1950 XTX) and a card that offered good performance at between $250 and $300 (the GeForce 7950 GT) would give us a fairly complete picture of what to expect.

The reason this really makes sense, as we will show in a second, is that the 8800 GT absolutely blows away every DX9 class part out there. The only thing we really need to show is what kind of performance improvement you can expect depending on the type of hardware you own. If you own the best possible previous generation card, you get a very good performance improvement at most resolutions. If you own a previous generation card from the same price segment, you can expect a huge improvement in performance across the board. That said, feast your eyes on what everyone who hasn't upgraded yet can look forward to (in addition to all the added features of the GeForce 8 Series).

GeForce 8800 GT MultiGPU Scaling

It will still be quite a while before we see multiGPU solutions provide the stability and consistency of a single GPU. For those interested, though, NVIDIA has left enough of an opening in their product line up this time around that SLI actually makes sense as a high end solution rather than just a potential upgrade path.

With the elimination of the 8800 GTS lineup, and the fact that performance of the 8800 GT is much faster than half the speed of the 8800 GTX, 8800 GT SLI looks pretty good under games that scale with SLI. Not only that, but 2x 8800 GT cards will cost, at most, as much as an 8800 GTX. Once the price of the 8800 GT approaches $200, as we expect it to, the price of a solution faster than the 8800 GTX will be available for much less money.

We do have to keep in mind that not everything scales with SLI, and we still have the occasional minor problem with stability or consistency. The scaling issue can be eased through the use of SLIAA in games that are able to benefit.

Power Consumption

As this is NVIDIA's first 65nm part, it is certainly of interest to see how it stacks up against the current lineup in terms of power consumption. NVIDIA quotes the max power of the 8800 GT as 105W, but in the real world, we aren't just stressing the GPU. Let's take a look at system power draw under 3DMark06 (specifically the pixel shader test).

Total System Power Consumption

The 8800 GT draws less power than anything that competes with it in terms of performance. When G80 hit last year, we made a big deal out of how power related to performance. This card simply blows everything else away in terms of how little power is needed to attain incredible performance.

8800 GT SLI does draw more power than the 8800 GTX, but it also performs much better in cases where performance scales with SLI. For those who want high performance, power is generally less of an object, but it's good to know that 2x 8800 GT cards won't break the bank like a pair of 2900 XTs in CrossFire.

Final Words

It's really not often that we have the pleasure to review a product so impressively positioned. The 8800 GT is a terrific part, and it is hitting the street at a terrific price (provided NVIDIA's history of properly projecting street prices continues). The performance advantage and price utterly destroyed our perception of the GPU landscape. We liked the value of the 8800 GTS 320, and we were impressed when NVIDIA decided to go that route, providing such a high performance card for so little money. Upping the ante even more this time around really caught us off guard.

This launch really has the potential to introduce a card that could leave the same lasting impression on the computer industry that the Ti4200 left all those years ago. This kind of inflection point doesn't come along every year, or even every generation. But when architecture, process enhancements, and design decisions line up just right, the potential for a revolutionary product is high. Maybe our expectations were lowered by the lackluster performance of the 8600 and 2600 series of cards, as well as the lack of true midrange cards priced between $200 and $250. But even without the sad state of the low end and the absence of a midrange part, the 8800 GT is a great option.

What we expect going forward is for NVIDIA to fill in their now mostly devastated product line (we only count the 8400, 8500, 8800 GT, and 8800 GTX/Ultra as viable offerings from NVIDIA) with a new range of 65nm parts. As soon as their process is up to speed and validated by a strong run of G92 hardware, it will only be logical to move all (or most) other GPUs over. The production of smaller die sizes directly translates to monetary savings. There is a cost associated with moving a design over to a new process, but the 8800 GT could have been built with this in mind. It could be that 8800 GT is simply a way to ramp up 65nm production for the rest of the lineup. They could have hidden some extra transistors up there to enable them to simply turn on a higher end part when yields get high enough. Alternately, perhaps we could see another line of low end cards make their way out based on the 65nm process (because smaller die size adds up to reduced manufacturing cost).

Whatever the reason for the 8800 GT, we are glad of its existence. This truly is the part to beat in terms of value.
