The First PCIe 2.0 Graphics Card

NVIDIA's 8800 GT is the "world's first consumer GPU to support PCI Express 2.0." Although AMD's Radeon HD 2400/2600 have PCIe 2.0 bandwidth, they don't implement the full spec, leaving the 8800 GT technically the first full PCIe 2.0 GPU. Currently, the only motherboard chipset out there that can take advantage of this is Intel's X38. We have yet to run benchmarks over PCIe 2.0, but we don't expect any significant impact on current games and consumer applications. We aren't bandwidth limited by PCIe 1.1 with its 4GB/sec in each direction, so it's unlikely the speed boost would really help. This sentiment is confirmed by game developers and NVIDIA, but if any of our internal tests show anything different, we'll certainly put a follow-up together.

PCIe 2.0 itself offers double the speed of the original spec. This means pairing an x16 PCIe 2.0 GPU with an x16 electrical PCIe 2.0 slot on a motherboard will offer 8GB/sec of bandwidth upstream and downstream (16GB/sec total bandwidth). This actually brings us to an inflection point in the industry: the CPU now has a faster connection to the GPU than to main system memory (compared to 800MHz DDR2). When we move to 1066MHz and 1333MHz DDR3, system memory will be faster, but for now most people will still be using 800MHz memory even with PCIe 2.0. PCIe 3.0 promises to double the bandwidth again over version 2.0, which would likely put the graphics card ahead of memory in terms of potential CPU I/O speed once more. This will still be limited by the read and write speed of the graphics card itself, which has traditionally left a lot to be desired. Hopefully GPU makers will catch up and offer faster GPU memory read speeds as well.
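The arithmetic behind those figures is simple enough to verify. As a rough sanity check of our own (not anything published by NVIDIA or the PCI-SIG), the short Python sketch below derives the per-direction numbers from the per-lane transfer rate and the 8b/10b encoding overhead that both PCIe 1.x and 2.0 use; the helper function name and structure are purely illustrative.

```python
# Rough sanity check of the PCIe bandwidth figures quoted above.
# Per-direction bandwidth = per-lane transfer rate x encoding efficiency x lane count.
# (Function name and structure are illustrative, not from the article.)

def pcie_x16_gb_per_s(transfers_per_s, encoding_efficiency, lanes=16):
    """Per-direction bandwidth of a PCIe link in GB/s."""
    bits_per_second = transfers_per_s * encoding_efficiency * lanes
    return bits_per_second / 8 / 1e9  # bits -> bytes -> GB

# PCIe 1.x: 2.5 GT/s per lane, 8b/10b encoding (80% efficient)
gen1 = pcie_x16_gb_per_s(2.5e9, 8 / 10)    # ~4 GB/s each direction
# PCIe 2.0: 5.0 GT/s per lane, same 8b/10b encoding
gen2 = pcie_x16_gb_per_s(5.0e9, 8 / 10)    # ~8 GB/s each direction

print(f"PCIe 1.x x16: {gen1:.0f} GB/s per direction")
print(f"PCIe 2.0 x16: {gen2:.0f} GB/s per direction ({2 * gen2:.0f} GB/s total)")
```

Running it prints 4 GB/s per direction for a PCIe 1.x x16 link and 8 GB/s per direction (16 GB/s aggregate) for PCIe 2.0, matching the numbers above.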

For now, the only key point is that the card supports PCIe 2.0, and moving forward in bandwidth before we need it is a terrific step: it enables developers by giving them the potential to make use of a feature before there is an immediate need. This is certainly a good thing, as massively parallel processing, multi-GPU rendering, physics on the graphics card, and other GPU computing techniques and technologies threaten to become mainstream. While we may not see applications that push PCIe 2.0 in the near term, moving over to the new spec is an important step, and we're glad to see it happening at this pace. But there are no real tangible benefits to the consumer right now either.

The transition to PCIe 2.0 won't be anything like the move from AGP to PCIe. The cards and motherboards are backwards and forwards compatible. PCIe 1.0 and 1.1 compliant cards can be plugged into a PCIe 2.0 motherboard, and PCIe 2.0 cards can be plugged into older motherboards. This leaves us with zero impact on the consumer due to PCIe 2.0, in more ways than one.

Comments

  • AggressorPrime - Monday, October 29, 2007 - link

    I made a typo. Let us hope they are not on the same level.
  • ninjit - Monday, October 29, 2007 - link

    This page has me very confused:
    http://www.anandtech.com/video/showdoc.aspx?i=3140...

    The text of the article goes on as if the GT doesn't really compare to the GTX, except on price/performance:

    quote:

    We would be out of our minds to expect the 8800 GT to even remotely compete with the GTX, but the real question is - how much more performance do you get from the extra money you spent on the GTX over the GT?


    quote:

    But back to the real story, in spite of the fact that the 8800 GT doesn't touch the GTX, two of them will certainly beat it for either equal or less money.



    Yet all the graphs show the GT performing pretty much on par with the GTX, with at most a 5-10fps difference at the highest resolution.

    I didn't understand that last sentence I quoted above at all.
  • archcommus - Monday, October 29, 2007 - link

    This is obviously an amazing card, and I hope it sets a new trend of getting good gaming performance in the latest titles for around $200 like it used to be, unlike the recent trend of having to spend $350+ for high end (not even ultra high end). However, I don't get why a GT part is higher performing than a GTS; isn't that going against their normal naming scheme a bit? I thought it was typically Ultra -> GTX -> GTS -> GT -> GS, or something like that.
  • mac2j - Monday, October 29, 2007 - link

    I've been hearing rumors about an Nvidia 9800 card being released in the coming months .... is that the same card with an outdated/incorrect naming convention or a new architecture beyond G92?

    I guess if Nvidia had a next-gen architecture coming, it would explain why they don't mind wiping some of their old products off the board with the 8800 GT, which seems as though it will be a dominant part for the remaining lifetime of this generation of parts.
  • MFK - Monday, October 29, 2007 - link

    After lurking on Anandtech for two layout/design revisions, I have finally decided to post a comment. :D
    First of all hi all!

    Second of all, is it okay that nVidia decided not to introduce a proper next-gen part in favour of this mid-range offering? Okay, so it's good and whatnot, but what I'm wondering, and something that the article does not talk about, is what the future value of this card is. Can I expect this to play some upcoming games (Alan Wake?) at 1600 x 1200? I know it's hard to predict, but industry analysts like you guys should have some idea. Also, how long can I expect this card to continue playing games at acceptable framerates? Any idea, anyone?
    Thanks.
  • DerekWilson - Monday, October 29, 2007 - link

    that's a tough call ....

    but really, it's up to the developers.

    UT3 looks great in DX9, and Bioshock looks great in DX10. Crysis looks amazing, but it's a demo, not final code, and it does run very slowly.

    The bottom line is that developers need to balance the amazing effects they show off with playability -- it's up to them. They know what hardware you've got, and they choose whether to push the envelope or not.

    I know that's not an answer, sorry :-( ... it is just nearly impossible to say what will happen.
  • crimson117 - Monday, October 29, 2007 - link

    How much RAM was on the 8800 GT used in testing? Was it 256MB or 512MB?
  • NoBull6 - Monday, October 29, 2007 - link

    From context, I'm thinking 512MB. Since 512MB cards are the only ones available in the channel, and Derek was hypothesizing about the pricing of a 256MB version, I think you can be confident this was a 512MB test card.
  • DerekWilson - Monday, October 29, 2007 - link

    correct.

    256MB cards do not exist outside NVIDIA at this point.
  • ninjit - Monday, October 29, 2007 - link

    I was just wondering about that too.

    I thought I missed it in the article, but I didn't see it in another run through.

    I see I'm not the only one who was curious
