Extreme Performance GPUs

There are basically only three Extreme Performance GPUs currently available. Of course we have the GeForce 8800 GTS and GTX, but we also include the GeForce 7950 GX2 in this category. Besides the individual graphics cards, we will finally include all of the multi-GPU configurations that we feel are worth considering. We'll start this category with a look at the various options available.

Extreme Performance GPUs

GPU             Pixel Shaders   Vertex Shaders   ROPs   Core Speed (MHz)   RAM Speed (effective MHz)   Memory Interface   Price
X1900 GT CF     72              16               24     575                1200                        256-bit            $352
X1950 Pro CF    72              16               24     575                1380                        256-bit            $412
7900 GT SLI     48              16               32     450                1320                        256-bit            $492
7950 GX2        48              16               32     500                1200                        256-bit            $465
8800 GTS        96              96               20     500                1600                        320-bit            $455
7950 GT SLI     48              16               32     550                1400                        256-bit            $498
7900 GTO SLI    48              16               32     650                1320                        256-bit            $620
7950 GX2 QSLI   96              32               64     500                1200                        256-bit            $930
7900 GTX SLI    48              16               32     650                1600                        256-bit            $860
X1900 XT CF     96              16               32     625                1450                        256-bit            $770
X1950 XTX CF    96              16               32     650                2000                        256-bit            $774
8800 GTX        128             128              24     575                1800                        384-bit            $603
8800 GTS SLI    192             192              40     500                1600                        320-bit            $910
8800 GTX SLI    256             256              48     575                1800                        384-bit            $1206
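As a point of reference, the RAM speed and memory interface columns above determine each card's peak theoretical memory bandwidth (bus width in bytes multiplied by the effective memory clock). The following is a minimal, purely illustrative Python sketch of that calculation using a few figures from the table; note that the 7950 GX2 result is per GPU, since each of its two GPUs has its own 256-bit bus.

```python
# Peak theoretical memory bandwidth = (memory interface in bits / 8) * effective
# memory clock. RAM speed and bus width figures are taken from the table above.
cards = {
    "8800 GTS":  {"ram_mhz": 1600, "bus_bits": 320},
    "8800 GTX":  {"ram_mhz": 1800, "bus_bits": 384},
    "X1950 XTX": {"ram_mhz": 2000, "bus_bits": 256},
    "7950 GX2":  {"ram_mhz": 1200, "bus_bits": 256},  # per GPU; the GX2 carries two
}

for name, card in cards.items():
    # bytes per clock * effective MHz -> MB/s, then scale to GB/s
    bandwidth_gbs = card["bus_bits"] / 8 * card["ram_mhz"] / 1000
    print(f"{name:10s} {bandwidth_gbs:5.1f} GB/s")
```

The 8800 GTX's wider 384-bit interface is what pushes it to 86.4 GB/s despite a lower effective memory clock than the X1950 XTX.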

The first four configurations in the above table are generally going to be slower than a single 7950 GX2, so we would avoid them. Even X1900 GT CrossFire, the closest of the four, is better passed over in favor of a single High-End GPU at that price point, because the overall difference in performance isn't going to be much. In terms of performance, the 7950 GX2 actually ends up being faster than the 8800 GTS, but if you haven't purchased a GX2 already there's no real reason to buy one now. Throw a bit of overclocking at the 8800 GTS and you can easily close the performance gap (and then some), plus you still get DirectX 10 support and lower noise levels.


Given that most of the remaining configurations can't even match the overall performance of a single GeForce 8800 GTX - they might prove faster in a few titles, but on average they will be slower - there's really no reason to purchase anything less than a GeForce 8800 series card or two if you are after extreme performance. Keep in mind that a single 8800 GTX is able to run most games at 2560x1600 with 4x antialiasing at reasonable frame rates, so unless you have a 30" display you may not feel any need to purchase more than one 8800 GTX card. If you simply want the best of the best and money is no object, of course dual 8800 GTX cards in SLI can't be beat for insane performance. Just make sure the rest of your system is up to snuff.

Somewhat similar to ATI's use of more pixel shader units on the X1900 cards in order to improve performance relative to the X1800, NVIDIA packs a whopping 96 or 128 shaders into the G80 cores. Unlike previous GPU designs (other than the Xbox 360's Xenos chip), the G80 shaders are "unified shaders" that can function as pixel, vertex, or geometry shaders as appropriate. (Geometry shaders are one of the new additions to DirectX 10.) Each individual shader on the G80 is going to be less powerful than an equivalent shader on the G70 core, but the flexibility along with the sheer number of shader units makes for an extremely powerful, forward-thinking architecture.
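To illustrate why a unified shader pool is attractive, consider the following toy model. This is not NVIDIA's actual scheduler: the "work unit" amounts and the assumption of equal per-shader throughput are invented purely for illustration (as noted above, individual G80 shaders are actually less powerful than G70 shaders), and the unit counts simply follow the per-card figures quoted in this guide (24 pixel plus 8 vertex shaders for a single G70/G71 card, 128 unified shaders for the 8800 GTX). The point is that a fixed pixel/vertex split leaves units idle whenever the workload mix doesn't match the hardware split, while a unified pool keeps everything busy.

```python
# Toy model of fixed pixel/vertex shader pools (G70-style) versus a unified pool
# (G80-style). Unit counts follow the per-card figures quoted in this guide;
# the "work unit" amounts and equal per-shader throughput are invented purely
# for illustration.

frames = [
    ("pixel-heavy scene", 10, 90),    # (name, vertex work, pixel work)
    ("vertex-heavy scene", 60, 40),
]

def fixed_split(vertex_work, pixel_work, vertex_units=8, pixel_units=24):
    # The frame is gated by whichever dedicated pool finishes last;
    # units in the other pool sit idle for the rest of the frame.
    frame_time = max(vertex_work / vertex_units, pixel_work / pixel_units)
    utilization = (vertex_work + pixel_work) / (frame_time * (vertex_units + pixel_units))
    return frame_time, utilization

def unified(vertex_work, pixel_work, units=128):
    # Any shader can pick up whatever work remains, so in this idealized
    # model the whole pool stays busy until the frame is finished.
    frame_time = (vertex_work + pixel_work) / units
    return frame_time, 1.0

for name, vertex_work, pixel_work in frames:
    _, util_fixed = fixed_split(vertex_work, pixel_work)
    _, util_unified = unified(vertex_work, pixel_work)
    print(f"{name}: fixed split {util_fixed:.0%} utilization, unified {util_unified:.0%} utilization")
```

In the vertex-heavy case the fixed pools spend most of the frame waiting on the vertex units, which is exactly the kind of imbalance the unified design avoids.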


It's probably not too surprising that the NVIDIA GeForce 8800 line gets our recommendation right now for those of you who are after maximum graphics performance. There are no other graphics cards that can come near the performance level offered by the 8800 GTX, and no multi-GPU solution can touch the 8800 GTX SLI. Before going out and spending $600 or more on a graphics setup, however, there are some other things we need to mention.

If you decide to go out and purchase a GeForce 8800 card, you are definitely living on the "bleeding edge" of technology. As has been the case with most new graphics technology launches (DirectX 7, 8, and 9), the drivers and software really aren't fully mature at present. We have seen at least one game already where the current NVIDIA drivers do not function properly, and we have heard various reports of additional games that don't work properly, or at all, with the G80 cards. If you don't like being a beta tester, you should probably wait at least another month or two before purchasing any DirectX 10 hardware.

That said, some of you are probably wondering what NVIDIA's competition can bring to the table in the near future. Unfortunately, we don't have the answer to that question; all we know is that AMD/ATI plans to release its next-generation DirectX 10 capable R600 hardware sometime in Q1'07 - some sources say early Q1, so it might only be another month or two before we can provide answers. We would expect the R600 to be competitive with the G80, and it wouldn't be too surprising to see it take the lead in some benchmarks. It also wouldn't be surprising to see driver issues similar to what NVIDIA is currently experiencing. Caveat emptor (let the buyer beware)!

The simple fact of the matter is that no one who really knows what the R600 can do is going to talk right now. You can wait to see what happens in the next few months, but of course faster products are always coming out. If you've got the money, though, a GeForce 8800 GTX (or two) should keep you gaming happily for the next couple of years (once the "beta" issues are solved).

Would we actually recommend purchasing a GeForce 8800 GTX right now? That all depends on how much time you spend gaming. If you've got a Core 2 Extreme processor (or a Core 2 Duo overclocked to a similar level), a 30" LCD, lots of memory and hard drive space, and a power supply capable of delivering 1.21 Gigawatts of power, by all means go nuts. Hopefully you also love to play the latest and greatest games at maximum detail levels, or there's a good chance all of that raw performance potential will go untapped. Don't be surprised if you run into some problems over the next few months while the drivers are ironed out, either. For the majority of people, a single high-end graphics card is going to be sufficient, with potentially fewer headaches as well.

Comments

  • Jodiuh - Wednesday, December 13, 2006 - link

    The FR bought release day from Fry's had a 39C transistor and hit 660/1000. The AR ordered online last week has a 40C transistor and hits 630/1000. It may not be quite as fast, but I'll be keeping the newer AR w/ the 40C transistor...comforts me at night. :D

  • Jodiuh - Thursday, December 14, 2006 - link

    Reply from EVGA!

    Jod,
    AR= Etail/Retail RoHS compliant
    FR= Frys Retail RoHS compliant

    All of our cards had the correct transistor value when shipped out.

    Regards,

  • munky - Wednesday, December 13, 2006 - link

    quote:

    ATI's X1800 line on the other hand is quite different from the X1900 parts, with the latter parts having far more pixel pipelines, although in terms of performance each pixel pipeline on an X1900 chip is going to be less powerful than an X1800 pixel pipeline

    Again, this is completely wrong. The major difference between the x1800 and x1900 cards is that the x1900's have 3 pixel shaders per "pipe", whereas the x1800's only have one. If anything, the x1900 pipes are more powerful.
  • evonitzer - Wednesday, December 13, 2006 - link

    Akin to my comment above, quads are the thing these days, so the 1900 series has 4 pixel shaders per pipe. And if you go back to the original article when the 1900 was released, you'll see that the whole architecture is closer to 4 x1600's than 3 x1800's, either of which would result in the 48 shaders that we see. I recommend you read the first few pages of the debut article, but I think we can agree that the shaders in the x1800 were probably more potent than the ones in the 1600, so the 1900 is probably a little wimpier per shader than the 1800. However, it has 3 times as many, so it's better.

    Also the comment was probably intended to dissuade people from assuming that the 1900 would be 3 times better than the 1800, and that there is a difference of architectures going on here.
  • JarredWalton - Wednesday, December 13, 2006 - link

    quote:

    Also the comment was probably intended to dissuade people from assuming that the 1900 would be 3 times better than the 1800, and that there is a difference of architectures going on here.


    Ding! That was a main point of talking about the changes in architecture. In the case of the X1650 XT, however, double the number of pixel shaders really does end up being almost twice as fast as the X1600 XT.

    I also added a note on the page talking about the G80 mentioning that they have apparently taken a similar route, using many more "less complex" shader units in order to provide better overall performance. I am quite sure that a single G80 pixel shader (which of course is a unified shader, but that's beside the point) is not anywhere near as powerful as a single G70 pixel shader. When you have 96/128 of them compared to 24, however, more definitely ends up being better. :-)
  • munky - Wednesday, December 13, 2006 - link

    quote:

    ATI needed a lot more pipelines in order to match the performance of the 7600 GT, indicating that each pipeline is less powerful than the GeForce 7 series pipelines, but they are also less complex


    The 7600gt is 12 pipes. The x1650xt is 8 pipes with 3 pixel shaders each. You may want to rethink the statement quoted above.
  • evonitzer - Wednesday, December 13, 2006 - link

    What he meant were "pixel shaders", which seem to be interchanged with pipelines quite often. If you look on the table you'll see that the x1650xt is listed as having 24 pixel pipelines, and the 7600gt has 12 pixel pipelines, when they should read shaders instead.

    Also quads seem to be the thing, so the 7600 gt probably has 3 quads of shaders, and the 1650 has twice that with 6 quads. Pixel shaders, to be more exact.
  • JarredWalton - Wednesday, December 13, 2006 - link

    I have changed references from "pixel pipelines" to "pixel shaders". While it may have been a slight error in semantics to call them pipelines before, the basic summary still stands. ATI needed more pixel shaders in order to keep up with the performance and video was offering, indicating that each pixel shader from ATI is less powerful (overall -- I'm sure there are instances where ATI performs much better). This goes for your comment about X1800 below as well.
  • Spoelie - Wednesday, December 13, 2006 - link

    why does nvidia always gets replaced to "and video" in your texts? here and in the article :)
  • JarredWalton - Wednesday, December 13, 2006 - link

    Speech recognition does odd things. I don't proof posts as well as I should. :)
