Introduction

Figuring out what sort of computer hardware to get for your next upgrade can be a tricky task, and if you are looking to get something for someone else it can be even more difficult. Some components have universal application, in that they can improve performance no matter what you do with the computer. For most components, however, individual usage patterns will dictate how much benefit you get from an upgrade. Processors, displays, and memory typically fall into the "universal upgrade" category; hard drives, graphics cards, power supplies, cases, and other accessories, on the other hand, may or may not help performance.

Today we will be taking a look at graphics card upgrades, so before we even get to the recommendations, the first thing you need to ask yourself is whether or not you really need a faster graphics card. There are basically three areas that can benefit from a better graphics card, with a fourth on the way in the near future. Starting with the future, you have Windows Vista, which will require at minimum a DirectX 9 capable graphics card in order to enable the Aero Glass user interface. Vista is scheduled to launch in the very near future, and we will take a closer look at its performance requirements in a separate article. Of the other three areas, one that we won't cover here is the use of graphics cards for professional applications, simply because that is beyond the scope of this article. The remaining two areas of potential interest are video decoding/acceleration and computer gaming.

Video decoding support involves several things. First, you have performance-oriented improvements: can the GPU reduce the CPU load during video playback? Second, you have the quality aspects: does the GPU make the resulting video output look better? Finally, video aficionados will definitely want to worry about HDCP support, not just for their graphics card but also for their display. We recently took a look at several of these areas in our HDCP Roundup and HDCP H.264 decoding articles, while in the past we have looked at quality comparisons between NVIDIA's PureVideo and ATI's AVIVO. We will take a closer look at the quality and performance of HDCP-enabled graphics cards again in the near future, but for now we refer interested readers to the referenced articles.
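For readers who want to quantify that first point on their own system, a simple approach is to compare average CPU utilization during playback with hardware acceleration enabled versus disabled in the player. Below is a minimal sketch of such a measurement, assuming a Python environment with the third-party psutil package installed; the sampling duration and interval are arbitrary choices for illustration, not values from our testing.

```python
# Minimal sketch: sample overall CPU utilization while a video is playing.
# Run once with hardware acceleration enabled in your player and once with
# it disabled, then compare the two averages.
# Requires the third-party psutil package (pip install psutil).
import time

import psutil


def sample_cpu_load(duration_s: float = 30.0, interval_s: float = 1.0) -> float:
    """Average system-wide CPU utilization over the sampling window."""
    samples = []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        # cpu_percent() blocks for interval_s and returns the utilization
        # percentage measured over that interval.
        samples.append(psutil.cpu_percent(interval=interval_s))
    return sum(samples) / len(samples)


if __name__ == "__main__":
    print(f"Average CPU load: {sample_cpu_load():.1f}%")
```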

We do need to insert one word of caution for people considering any new graphics card with the intention of using it for viewing HDCP content. If you have a display that requires a dual-link DVI connection, you're going to run into some problems. Basically, HDCP was architected to only support single-link connections, so you are going to be limited to viewing content at a maximum resolution of 1920x1080. Worse, as we understand it, HDCP is not supported over a dual-link connection at all, so if you have something like a 30" LCD and you want to view HDCP content, you will need to use a single-link cable. Welcome to the bleeding edge...
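To see why the single-link restriction caps things at roughly 1920x1080, consider the bandwidth math: a single TMDS link on DVI tops out at a 165 MHz pixel clock, and the clock a display mode needs is the total pixels per frame (active plus blanking) times the refresh rate. The sketch below runs the numbers; the total timing figures are approximations of common standard timings, so treat the exact values as illustrative.

```python
# Rough sketch of why HDCP's single-link limitation caps resolution
# around 1920x1080: single-link DVI tops out at a 165 MHz pixel clock.
SINGLE_LINK_MAX_MHZ = 165.0


def required_pixel_clock_mhz(h_total: int, v_total: int, refresh_hz: float) -> float:
    """Pixel clock = total pixels per frame (including blanking) x refresh rate."""
    return h_total * v_total * refresh_hz / 1e6


# (mode name, approximate total horizontal/vertical timing incl. blanking, refresh)
modes = [
    ("1920x1080 @ 60 Hz", 2200, 1125, 60.0),  # standard HDTV timing, ~148.5 MHz
    ("2560x1600 @ 60 Hz", 2720, 1646, 60.0),  # reduced blanking, ~268.6 MHz
]

for name, h_total, v_total, hz in modes:
    clk = required_pixel_clock_mhz(h_total, v_total, hz)
    verdict = "fits single-link" if clk <= SINGLE_LINK_MAX_MHZ else "needs dual-link"
    print(f"{name}: {clk:.1f} MHz -> {verdict}")
```

As the output shows, 1920x1080 at 60 Hz squeaks in under the single-link ceiling, while a 30" panel's 2560x1600 native resolution needs well over twice that, hence the dual-link requirement and the HDCP headache.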

That leaves the final category and the one that the majority of people are most interested in: gaming performance. That is not to say that everyone worries about gaming performance, but rather that anyone who is seriously looking at a faster graphics card is likely to be doing so more for gaming than for anything else. If you don't play games, there is a very good chance that you don't need to worry about getting a faster graphics chip into your computer right now. End of story. Windows Vista and video decoding support might make a few more people look at graphics card upgrades, but for this Holiday Shopping Guide we will focus primarily on gaming performance.

As with our recent Holiday CPU Guide, we have quite a few price segments to cover, ranging from Ultra Budget GPUs through Extreme Performance GPUs. We also have to worry about multiple GPU combinations courtesy of CrossFire and SLI. With numerous overlapping products from both ATI and NVIDIA, it is important to remember that we will be classifying products based on price rather than on performance, so in some cases less expensive graphics cards will outperform more expensive models. Finally, let's not forget that there are still a few AGP users hanging around, so we will mention those products where appropriate.

Comments

  • Jodiuh - Wednesday, December 13, 2006 - link

    The FR bought release day from Fry's had a 39C transistor and hit 660/1000. The AR ordered online last week has a 40C transistor and hits 630/1000. It may not be quite as fast, but I'll be keeping the newer AR w/ the 40C transistor...comforts me at night. :D

  • Jodiuh - Thursday, December 14, 2006 - link

    Reply from EVGA!

    Jod,
    AR= Etail/Retail RoHS compliant
    FR= Frys Retail RoHS compliant

    All of our cards had the correct transistor value when shipped out.

    Regards,

  • munky - Wednesday, December 13, 2006 - link

    quote:

    ATI's X1800 line on the other hand is quite different from the X1900 parts, with the latter parts having far more pixel pipelines, although in terms of performance each pixel pipeline on an X1900 chip is going to be less powerful than an X1800 pixel pipeline

    Again, this is completely wrong. The major difference between the x1800 and x1900 cards is that the x1900's have 3 pixel shaders per "pipe", whereas the x1800's only have one. If anything, the x1900 pipes are more powerful.

  • evonitzer - Wednesday, December 13, 2006 - link

    Akin to my comment above, quads are the thing these days, so the 1900 series has 4 pixel shaders per pipe. And if you go back to the original article when the 1900 was released, you'll see that the whole architecture is closer to 4 x1600's than 3 x1800's, either of which would result in the 48 shaders that we see. I recommend you read the first few pages of the debut article, but I think we can agree that the shaders in the x1800 were probably more potent than the ones in the 1600, so the 1900 is probably a little wimpier per shader than the 1800. However, it has 3 times as many, so it's better.

    Also the comment was probably intended to dissuade people from assuming that the 1900 would be 3 times better than the 1800, and that there is a difference of architectures going on here.

  • JarredWalton - Wednesday, December 13, 2006 - link

    quote:

    Also the comment was probably intended to dissuade people from assuming that the 1900 would be 3 times better than the 1800, and that there is a difference of architectures going on here.


    Ding! That was a main point of talking about the changes in architecture. In the case of the X1650 XT, however, double the number of pixel shaders really does end up being almost twice as fast as the X1600 XT.

    I also added a note on the page talking about the G80 mentioning that they have apparently taken a similar route, using many more "less complex" shader units in order to provide better overall performance. I am quite sure that a single G80 pixel shader (which of course is a unified shader, but that's beside the point) is not anywhere near as powerful as a single G70 pixel shader. When you have 96/128 of them compared to 24, however, more definitely ends up being better. :-)

  • munky - Wednesday, December 13, 2006 - link

    quote:

    ATI needed a lot more pipelines in order to match the performance of the 7600 GT, indicating that each pipeline is less powerful than the GeForce 7 series pipelines, but they are also less complex


    The 7600gt is 12 pipes. The x1650xt is 8 pipes with 3 pixel shaders each. You may want to rethink the statement quoted above.

  • evonitzer - Wednesday, December 13, 2006 - link

    What he meant were "pixel shaders", which seem to be interchanged with pipelines quite often. If you look on the table you'll see that the x1650xt is listed as having 24 pixel pipelines, and the 7600gt has 12 pixel pipelines, when they should read shaders instead.

    Also quads seem to be the thing, so the 7600 gt probably has 3 quads of shaders, and the 1650 has twice that with 6 quads. Pixel shaders, to be more exact.
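
For what it's worth, the counts quoted in this thread all reduce to the same pipes-times-shaders arithmetic. Here is a quick sketch using the per-pipe figures as the commenters describe them; these groupings reflect the thread's descriptions, not official ATI or NVIDIA terminology.

```python
# Shader totals implied by the figures quoted in this thread
# (per-pipe counts are as described by the commenters, not official specs).
cards = [
    # (name, pipes, pixel shaders per pipe)
    ("X1800 XT", 16, 1),  # one shader per pipe -> 16 total
    ("X1900 XT", 16, 3),  # three shaders per pipe -> 48 total
    ("X1650 XT", 8, 3),   # eight pipes x three shaders -> 24 total
    ("7600 GT", 12, 1),   # twelve traditional pipelines
]

for name, pipes, per_pipe in cards:
    total = pipes * per_pipe
    print(f"{name}: {pipes} pipes x {per_pipe} shader(s) = {total} pixel shaders")
```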

  • JarredWalton - Wednesday, December 13, 2006 - link

    I have changed references from "pixel pipelines" to "pixel shaders". While it may have been a slight error in semantics to call them pipelines before, the basic summary still stands. ATI needed more pixel shaders in order to keep up with the performance and video was offering, indicating that each pixel shader from ATI is less powerful (overall -- I'm sure there are instances where ATI performs much better). This goes for your comment about X1800 below as well.

  • Spoelie - Wednesday, December 13, 2006 - link

    why does nvidia always gets replaced to "and video" in your texts? here and in the article :)

  • JarredWalton - Wednesday, December 13, 2006 - link

    Speech recognition does odd things. I don't proof posts as well as I should. :)
