Final Thoughts

As we have seen, PCI Express 2.0 is the next step in the ongoing quest for more bandwidth, a trend in which each successive generation roughly doubles the maximum bandwidth of the one before it. This only makes sense: according to Moore's Law, the number of transistors placed in an inexpensive integrated circuit approximately doubles every two years, and the improvements in interconnect technology necessary to support these advancements are the logical extension of that law. Looking for performance improvements in today's product offerings with a next-generation interconnect technology (like PCI Express 2.0) is largely futile. The real benefit may come with the next round of video card releases - or perhaps Generation Next^2. Still, those quick to brush off PCI Express 2.0 as ineffective and unimportant should recall the past. The difference in performance experienced during the transition from AGP to PCI Express 1.x was not necessarily a revolution at the time either, but there are clear benefits to using PCI-E instead of the aging PCI interface, and we may yet see new implementations of PCI-E technology that make the 2.0 revision more important.

Just about every interconnect technology is moving from high-pin-count parallel interfaces to high-speed, low-pin-count, serial, point-to-point interfaces. We have seen the incredible difference a low-latency, high-bandwidth interface makes in the transition from PATA to SATA, as well as from PCI/AGP to PCI Express (AGP was really just a kludged-together remake of PCI technology). Moreover, we will see it again in the near future when Intel leaves behind its antiquated FSB topology for QuickPath - something AMD did years ago with the release of the Opteron/Athlon 64 line featuring HyperTransport Technology. Removing the MCH altogether means moving the memory controller on-die as well as relocating PCI Express resources to the CPU. The X38/X48 chipset will thus be simultaneously the first and last Intel MCH to make use of PCI Express 2.0 technology (unless the fabled P45 chipset, the 65nm die-shrink of P35 with additional improvements, sees the light of day).

No doubt, those who run their systems with PCI Express 2.0 graphics cards installed will see an increase in MCH power consumption. Although we can't tell you exactly what share of that figure is due to PCI Express circuit operation, we can tell you that it went up - the base link frequency of 5.0GHz at which portions of the die must run has at least doubled from the days of PCI Express 1.x. What this means to the bottom line we don't know; perhaps it helps explain why X38 seems to run so darn hot at times.

Maybe in the future we will see a mechanism on some motherboards that allows us to force PCI Express 1.x operation with PCI Express 2.0 capable graphics cards. (We do not recall testing many boards that had this option - allowing for PCI Express 1.1 or 1.0a.) This would be the only way for us to determine whether the increased signaling rate makes a big difference for today's products; while we would hope it helps, we remain skeptical at best. Purchasing a PCI Express 2.0 graphics card at this time is future-proofing more than anything else; we haven't noticed any performance differences when comparing PCI-E 1.x chipsets to PCI-E 2.0 chipsets with the newer GPUs, and we don't expect you to either. PCI-E 2.0 may prove more useful as a way to provide four x8 slots that each offer the same bandwidth as an older 1.x x16 slot. With Triple-SLI, Tri-Fire, and talk of GPU physics accelerators, including additional high-bandwidth slots on motherboards makes sense.
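The x8-versus-x16 equivalence comes down to simple per-lane arithmetic. As a minimal sketch (assuming the 8b/10b line code used by both the 1.x and 2.0 generations; figures are per direction):

```python
# Per-direction PCI Express bandwidth, assuming 8b/10b encoding
# (10 bits on the wire carry 8 bits of payload).
TRANSFER_RATE_GT = {"1.x": 2.5, "2.0": 5.0}  # per-lane signaling rate

def bandwidth_mb_s(gen: str, lanes: int) -> float:
    """Usable payload bandwidth in MB/s (decimal) for one direction."""
    payload_gbit = TRANSFER_RATE_GT[gen] * lanes * (8 / 10)
    return payload_gbit * 1000 / 8  # Gbit/s -> MB/s

# A 2.0 x8 slot matches a 1.x x16 slot: 4000 MB/s each way.
assert bandwidth_mb_s("2.0", 8) == bandwidth_mb_s("1.x", 16) == 4000.0
```

This is why a chipset that can split its 2.0 lanes four ways into x8 links would give each slot the full bandwidth of yesterday's x16 connection.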

Comments

  • nubie - Monday, January 14, 2008 - link

    I would like to point out that since the link auto-negotiates, you can plug x16 cards into x8, x4, x2, and x1 slots. The problem of physical connection is easily solved. I have done this two ways: by cutting the back out of the motherboard connector (seen here: http://picasaweb.google.com/nubie07/PCIEX1 ), and by cutting the connector on the video card down to x1 (sorry, no pics of this online). I did this to get 3 cards and 6 monitors on a standard (non-SLI) motherboard. You can also purchase stand-offs from x16 to x8-x1, or modify a x1-x1 standoff (or "wearout" adaptor) to allow the x16 card to plug in.

    The throughput was more than enough; depending on your video card's on-board RAM, it can even play newer games fine. The utter lack of multi-head display support in DirectX and most games is just mind-boggling. Tell me why PC games won't allow multi-player while consoles do?
  • cheburashka - Monday, January 7, 2008 - link

    "and there is no obvious reason as to why 2x8 CrossFire on a P965 chipset should not work"
    It has a single LTSSM (Link Training and Status State Machine), thus it cannot be split into multiple ports.
  • fredsky - Monday, January 7, 2008 - link

    sorry guys to be not quite as enthusiastic...
    http://www.fudzilla.com/index.php?option=com_conte...

    there ARE a lot of issues here, especially with RAID cards plugged into PCIe 2 slots - LSI, 3Ware, Areca, and so on.

    anand, can you run some tests?
    I read that the Gigabyte GA-X38-DQ6 is compatible with Areca at least.

    regards
    fredsky
  • decalpha - Monday, January 7, 2008 - link

    I am not sure who the guilty party is, but my new and shiny 8800GT refuses to POST. If you search the user forums, it's clear that most of the problems affect socket 939 systems with nvidia chipsets. In the end, it's the user who suffers.
  • Comdrpopnfresh - Monday, January 7, 2008 - link

    Does PCI-E 2.0 also increase the available current supplied to the card by the slot? Doubling seems to be a theme here... maybe from 75 to 150 watts? I skimmed, so I apologize if this was already written or mentioned...
  • kjboughton - Monday, January 7, 2008 - link

    Although we didn't discuss this in the article, I can certainly answer the question: no. The slot still supplies a maximum of 75W per the specification; however, the PCI Express Card Electromechanical (CEM) specification allows for an additional 150W of power delivery via external power cables, for a total of 225W. Anything above this number is technically out of specification.
  • AndyHui - Sunday, January 6, 2008 - link

    Didn't seem all that long ago when I wrote the first PCI Express article here on AT.... but looking back, that was 2003.

    Good article.... but I thought the official abbreviation was PCIe, not PCI-E?
  • saratoga - Saturday, January 5, 2008 - link

    "PCI Express 3.0 should further increase the PCI-E Multiplier to 80x, which will bring the base link frequency very near the maximum theoretical switching rate for copper (~10Gbps)."

    This will be quite a surprise to the Ethernet people who can do 100 Gbit/s over 2 ethernet twisted pairs on their prototype systems! 500% of the theoretical maximum for copper is pretty good.

    There's no theoretical maximum for copper, since in theory the SNR can be infinite, and thus you can keep coming up with better codes. There's a practical limit, set by just how high you can get the SNR in a real circuit, but that's also unbelievably high. The real limit for a PC is how much power you're willing to commit to your increasingly complicated transmission system.
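saratoga's point follows directly from the Shannon capacity formula, C = B * log2(1 + SNR), which grows without bound as SNR improves. A minimal sketch (the 5GHz analog bandwidth figure below is an arbitrary assumption for illustration, not a property of any real PCI-E channel):

```python
from math import log2

def shannon_capacity_gbps(bandwidth_ghz: float, snr_db: float) -> float:
    """Channel capacity C = B * log2(1 + SNR), with SNR given in dB."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_ghz * log2(1 + snr_linear)

# Same 5 GHz of analog bandwidth, progressively better SNR:
# capacity keeps climbing, so there is no fixed ceiling for copper.
for snr_db in (10, 30, 50):
    print(f"{snr_db} dB SNR -> {shannon_capacity_gbps(5.0, snr_db):.1f} Gbit/s")
```

The engineering limit is how much power and circuit complexity you can afford to spend pushing SNR (and coding gain) higher, not a hard property of the copper itself.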
