The First PCIe 2.0 Graphics Card

NVIDIA's 8800 GT is the "world's first consumer GPU to support PCI Express 2.0." Although AMD's Radeon HD 2400/2600 have PCIe 2.0 bandwidth, they don't implement the full spec, leaving the 8800 GT technically the first full PCIe 2.0 GPU. Currently, the only motherboard chipset out there that can take advantage of this is Intel's X38. We have yet to run benchmarks on PCIe 2.0, but we don't expect any significant impact on current games and consumer applications. We aren't bandwidth limited by PCIe 1.1 and its 4GB/sec in each direction, so it's unlikely that the speed boost will help much. Game developers and NVIDIA confirm this sentiment, but if any of our internal tests show anything different we'll certainly put together a follow-up.

PCIe 2.0 itself offers double the speed of the original spec. This means pairing a x16 PCIe 2.0 GPU with a x16 electrical PCIe 2.0 slot on a motherboard will offer 8GB/sec of bandwidth upstream and downstream (16GB/sec total bandwidth). This actually brings us to an inflection point in the industry: the CPU now has a faster connection to the GPU than to main system memory (at least with 800MHz DDR2). When we move to 1066MHz and 1333MHz DDR3, system memory will again be faster, but for now most people will still be using 800MHz memory even with PCIe 2.0. PCIe 3.0 promises to double the bandwidth yet again over version 2.0, which would likely put a graphics card ahead of memory in terms of potential CPU I/O speed once more. This will still be limited by the read and write speed of the graphics card itself, which has traditionally left a lot to be desired. Hopefully GPU makers will catch up and offer faster GPU memory read speeds as well.
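As a sanity check on those figures, the per-direction numbers fall straight out of the published per-lane signaling rates. The following is our own back-of-the-envelope math, not a measurement: PCIe 1.x signals at 2.5GT/sec per lane and PCIe 2.0 at 5GT/sec, and in both cases 8b/10b encoding means only 8 of every 10 bits on the wire carry data.

```python
def pcie_bandwidth_gb_s(lanes: int, gt_per_s: float) -> float:
    """Usable bandwidth per direction in GB/s, assuming 8b/10b encoding
    (8 data bits carried in every 10 line bits)."""
    usable_bits_per_s = lanes * gt_per_s * 1e9 * (8 / 10)
    return usable_bits_per_s / 8 / 1e9  # bits -> gigabytes

# x16 slots, per direction:
pcie1_x16 = pcie_bandwidth_gb_s(16, 2.5)  # PCIe 1.x: 4 GB/sec
pcie2_x16 = pcie_bandwidth_gb_s(16, 5.0)  # PCIe 2.0: 8 GB/sec

print(f"PCIe 1.x x16: {pcie1_x16:.1f} GB/s per direction")
print(f"PCIe 2.0 x16: {pcie2_x16:.1f} GB/s per direction")
```

Double the per-lane signaling rate with the same encoding overhead, and the per-direction bandwidth of a x16 slot doubles from 4GB/sec to 8GB/sec, matching the numbers above.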

For now, the key point is simply that the card supports PCIe 2.0, and moving forward in bandwidth before we need it is a terrific way to enable developers, giving them the potential to make use of a feature before there is an immediate need. This is certainly a good thing, as massively parallel processing, multi-GPU rendering, physics on the graphics card, and other GPU computing techniques and technologies threaten to become mainstream. While we may not see applications that push PCIe 2.0 in the near term, moving over to the new spec is an important step, and we're glad to see it happening at this pace. That said, there are no tangible benefits to the consumer right now.

The transition to PCIe 2.0 won't be anything like the move from AGP to PCIe. The cards and motherboards are backwards and forwards compatible. PCIe 1.0 and 1.1 compliant cards can be plugged into a PCIe 2.0 motherboard, and PCIe 2.0 cards can be plugged into older motherboards. This leaves us with zero impact on the consumer due to PCIe 2.0, in more ways than one.


  • EateryOfPiza - Monday, October 29, 2007 - link

    What kind of G92 variants can we expect by Christmas 07?

    Or Summer 08?
  • mpc7488 - Monday, October 29, 2007 - link

    HardOCP is reporting that nVidia is increasing the 8800GTS 640MB to 112 stream processors.
  • mpc7488 - Monday, October 29, 2007 - link

    Great article Derek - I think you can tell you're mildly excited about this product :)

    Is there a reason that you didn't do any tests with anti-aliasing? I would assume that would show more separation between the 8800GTX and the 8800GT.
  • chizow - Monday, October 29, 2007 - link

    Nice job as usual Derek!

    Just wondering, though, if you were able to test the cards at the same clock speeds. The GT by default has a ~100MHz advantage on the core over the GTS, which is a common reason the GTS falls so far behind in head-to-head testing. I expect the GT to have more OC'ing headroom than the GTS anyway, but it would be nice to see an apples-to-apples comparison to reveal the impact of some of the architecture changes from G80 to G92. Of note, the GT has fewer ROPs and a smaller memory bus but gains 1:1 address/filter units and 16 more stream processors.

    Also, I saw an early review that showed massive performance gains when the shader processor was overclocked on the GT; much bigger gains than from significant increases to the core/memory clocks. Similar testing with the GTS/GTX doesn't yield anywhere near that much performance gain when the shader core clock is bumped up.

    Lastly, any idea when the G92 8800GTS refresh is going to be released? With a 640MB GTS this seems more of a lateral move to an 8800GT, although a refreshed GTS with 128SP and all the other enhancements of the G92 should undoubtedly be faster than the GTX...and maybe even the Ultra once overclocked.
  • Hulk - Monday, October 29, 2007 - link

    I'm looking to build an HTPC, and this would be a great card if it handles video decoding. Does it?
