It should come as no surprise that the gaming industry is quite capable of driving innovation - often in the form of faster processors, more powerful 3D graphics adapters, and improved data exchange protocols. One such specification, PCI Express (PCI-E), was developed when system engineers and integrators recognized that the interconnect demands of emerging video, communications, and computing platforms far exceeded the capabilities of traditional parallel buses such as PCI. Although PCI Express is really nothing more than a general-purpose, scalable I/O signaling interconnect, it has quickly become the platform of choice for industry segments that need the high-performance, low-latency, scalable bandwidth it consistently provides. Graphics card makers have been using PCI Express technology for more than a generation now, and today's selection of PCI Express-enabled devices grows larger by the minute.

Unlike older parallel bus technologies such as PCI, PCI Express adopts a point-to-point serial interface. This architecture provides dedicated host-to-client bandwidth, meaning installed devices no longer contend for shared bus resources. It also eliminates many of the signal integrity problems, such as reflections and excessive jitter, associated with longer multi-drop buses. Cleaner signals mean tighter timing tolerances, reduced latencies, and faster, more efficient data transfers. To the gamer, of course, only one thing really matters: more frames per second and better image quality. While PCI-E doesn't directly provide either relative to AGP, it has enabled some improvements, along with the return of multi-card graphics solutions like SLI and CrossFire. For these reasons, it is no wonder that PCI Express is the interconnect of choice on modern motherboards.

The typical Intel chipset solution provides a host of PCI Express resources. Connections to the Memory Controller Hub (MCH) are usually reserved for devices that need direct, unfettered, low-latency access to the main system memory while those that are much less sensitive to data transfer latency effects connect to the I/O Controller Hub (ICH). This approach ensures that the correct priority is given to those components that need it the most (usually graphics controllers). When it comes to Intel chipsets, a large portion of the market segment distinction comes from the type and quantity of these available connections. Later we will look at some of these differences and discuss some of the performance implications associated with each.



In late 2006, the PCI-SIG (PCI Special Interest Group) released the 2.0 update to the PCI Express Base Specification to members for review and comment. Along with a host of new features comes the most significant change of all: an increase in the signaling rate to 5.0GT/s, double the 2.5GT/s of the PCI Express 1.x specification. This increase effectively doubles the maximum theoretical bandwidth of PCI Express and creates the additional data throughput that tomorrow's demanding systems will need for peak performance.
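That doubling is easy to verify from first principles: PCI Express 1.x and 2.0 both use 8b/10b line coding, so only 8 of every 10 transferred bits carry payload. A quick sketch of the arithmetic (the function name is our own):

```python
def pcie_lane_bandwidth_mb(gt_per_s: float, lanes: int = 1) -> float:
    """Theoretical PCIe 1.x/2.0 bandwidth per direction, in MB/s.

    8b/10b line coding means only 8 of every 10 transferred bits
    carry payload, so usable bits/s = GT/s * 0.8; divide by 8 for bytes.
    """
    return gt_per_s * 1e9 * 0.8 / 8 / 1e6 * lanes

print(pcie_lane_bandwidth_mb(2.5, 16))  # PCIe 1.x x16: 4000.0 MB/s
print(pcie_lane_bandwidth_mb(5.0, 16))  # PCIe 2.0 x16: 8000.0 MB/s
```

At 2.5GT/s this yields the familiar 250MB/s per lane per direction; at 5.0GT/s it becomes 500MB/s, or roughly 8GB/s each way across a full x16 link.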

Both ATI/AMD and NVIDIA have released their first generation of PCI Express 2.0 capable video cards. ATI has the complete Radeon HD 3000 series while NVIDIA offers the new 8800 GT as well as a 512MB version of the 8800 GTS (G92) built using 65nm node technology. Last month we took an in-depth look at these new NVIDIA cards - our testing, comments, and conclusion can be found here. We reviewed the ATI models a little earlier in November - the results are interesting indeed, especially when compared to NVIDIA's newest offerings. Take a moment to review these cards if you have not already and then come read about PCI Express 2.0, what it offers, what has changed, and what it means to you.

PCI Express Link Speeds and Bandwidth Capabilities
21 Comments

  • nubie - Monday, January 14, 2008 - link

    I would like to point out that since the link auto-negotiates, you can plug x16 cards into x8, x4, x2, and x1 slots. The problem of physical connection is easily solved. I have done this two ways: one by cutting the back out of the motherboard connector (seen here: http://picasaweb.google.com/nubie07/PCIEX1 ), and also by cutting the connector off of the video card down to x1 (sorry, no pics of this online). I did this to get 3 cards and 6 monitors on a standard (non-SLI) motherboard. You can also purchase stand-offs from x16 to x8-x1, or modify a x1-x1 standoff (or "wearout" adaptor) to allow the x16 card to plug in.

    The throughput was more than enough; depending on your video card's on-board RAM it can even play newer games fine. The utter lack of multi-head display support in DirectX and most games is just mind-boggling. Tell me why PC games won't allow multi-player, while consoles do?
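On Linux, the link parameters that this auto-negotiation settles on can be read straight from sysfs, which makes it easy to confirm whether a modified card actually trained down to the narrower width. A minimal sketch (the helper names and example bus address are ours; the sysfs attributes are standard on recent kernels):

```python
from pathlib import Path

SYSFS = Path("/sys/bus/pci/devices")

def parse_width(text: str) -> int:
    """sysfs reports link width as a bare number, e.g. "16"."""
    return int(text.strip())

def is_downtrained(current: int, maximum: int) -> bool:
    """True when a card negotiated fewer lanes than it supports -
    exactly what happens with an x16 card in a cut-down x1 slot."""
    return current < maximum

def link_widths(bdf: str) -> tuple:
    """Negotiated vs. maximum width for a device, e.g. "0000:01:00.0"."""
    dev = SYSFS / bdf
    return (parse_width((dev / "current_link_width").read_text()),
            parse_width((dev / "max_link_width").read_text()))

# An x16 card in a modified x1 slot trains down to x1:
print(is_downtrained(1, 16))  # True
```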
  • cheburashka - Monday, January 7, 2008 - link

    "and there is no obvious reason as to why 2x8 CrossFire on a P965 chipset should not work"
    It has a single LTSSM (Link Training and Status State Machine), so it cannot be split into multiple ports.
  • fredsky - Monday, January 7, 2008 - link

    sorry guys, not to be as enthusiastic...
    http://www.fudzilla.com/index.php?option=com_conte...

    there ARE a lot of issues here, especially with RAID cards plugged into PCIe 2.0 slots - LSI, 3ware, Areca, and so on.

    Anand, can you run some tests?
    I read that the Gigabyte GA-X38-DQ6 is compatible with Areca cards, at least.

    regards
    fredsky
  • decalpha - Monday, January 7, 2008 - link

    I am not sure who the guilty party is, but my new and shiny 8800GT refuses to POST. And if you search the user forums, it's clear that most of the problems are on Socket 939 systems with NVIDIA chipsets. In the end it's the user who suffers.
  • Comdrpopnfresh - Monday, January 7, 2008 - link

    Does PCI-E also increase the available current supplied to the card by the slot? Doubling seems to be a theme here... Maybe from 75-150 Watts? I skimmed, so I apologize if it was written or already mentioned...
  • kjboughton - Monday, January 7, 2008 - link

    Although we didn't discuss this in the article, I can certainly answer the question: no. The slot still supplies up to a maximum of 75W per the specification; however, the PCI Express Card Electromechanical interface spec will allow for an additional 150W power delivery via external power cables for a total of 225W. Anything above this number is technically out of specification.
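The budget described in that reply is simple enough to encode; a toy sketch using only the numbers quoted above (the function name is ours):

```python
def max_board_power(slot_w: int = 75, external_w: int = 150) -> int:
    """Maximum in-spec board power: up to 75W from the slot plus
    up to 150W delivered via external power cables."""
    return slot_w + external_w

print(max_board_power())  # 225 - anything above this is out of spec
```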
  • AndyHui - Sunday, January 6, 2008 - link

    Didn't seem all that long ago when I wrote the first PCI Express article here on AT.... but looking back, that was 2003.

    Good article.... but I thought the official abbreviation was PCIe, not PCI-E?
  • saratoga - Saturday, January 5, 2008 - link

    "PCI Express 3.0 should further increase the PCI-E Multiplier to 80x, which will bring the base link frequency very near the maximum theoretical switching rate for copper (~10Gbps)."

    This will be quite a surprise to the Ethernet people who can do 100 Gbit/s over 2 ethernet twisted pairs on their prototype systems! 500% of the theoretical maximum for copper is pretty good.

    There's no theoretical maximum for copper, since in theory the SNR can be infinite, and thus you can keep coming up with better codes. There's a practical limit, set by just how high you can get the SNR in a real circuit, but that's also unbelievably high. The real limit for a PC is how much power you're willing to commit to your increasingly complicated transmission system.
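The argument in that last comment is Shannon's capacity theorem, C = B·log2(1 + SNR): capacity grows without bound as SNR does, so any "maximum for copper" is a practical engineering limit rather than a theoretical one. A small illustration (the 1GHz bandwidth figure is chosen arbitrarily):

```python
from math import log2

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Channel capacity in bits/s: C = B * log2(1 + SNR)."""
    return bandwidth_hz * log2(1 + snr_linear)

# At high SNR, every ~3dB of extra SNR buys about one more bit/Hz:
for snr_db in (10, 20, 30):
    snr = 10 ** (snr_db / 10)
    gbps = shannon_capacity(1e9, snr) / 1e9
    print(f"{snr_db}dB SNR over a 1GHz channel: {gbps:.2f} Gbit/s")
```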
