PCI-Express Compliance: Does It Even Matter?

For a while now we’ve been under the impression that video card size and power consumption were ultimately capped by the PCI-Express specification. At present the specification and its addenda define normal (75W), 150W, 225W, and 300W PCIe card operation. In the case of 300W cards in particular, this is achieved through 75W from the PCIe slot, 75W from a 6-pin PCIe power connector, and 150W from an 8-pin PCIe power connector. As the name implies, the PCIe specification also defines what the 6-pin and 8-pin power connectors are supposed to be capable of, which is where the 75W and 150W figures come from respectively.
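To make the budgeting explicit, here is a minimal sketch of that arithmetic in Python (the table and function names are ours for illustration, not anything defined by the PCI-SIG):

```python
# Per-source power limits as the PCIe specification and its addenda lay them out.
PCIE_POWER_LIMITS_W = {
    "slot": 75,   # power drawn through the PCIe slot itself
    "6pin": 75,   # 6-pin auxiliary power connector
    "8pin": 150,  # 8-pin auxiliary power connector
}

def board_power_budget(*aux_connectors: str) -> int:
    """Spec-compliant board power: the slot allowance plus each aux connector."""
    return PCIE_POWER_LIMITS_W["slot"] + sum(
        PCIE_POWER_LIMITS_W[c] for c in aux_connectors
    )

print(board_power_budget())                # 75W  - slot only
print(board_power_budget("6pin"))          # 150W
print(board_power_budget("6pin", "6pin"))  # 225W
print(board_power_budget("6pin", "8pin"))  # 300W - the ceiling the 6990 exceeds
```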

Altogether, the biggest and most powerful card configuration in the PCIe specification allows for a 12.283” long, triple-wide card that consumes 300W. To date we’ve never seen a card exceed the physical specifications, but we’ve seen several cards exceed the electrical specifications. This includes cards such as the 5970 and some overclocking-oriented 5870s that were designed to handle more than 300W when overclocked, and even more exotic cards such as the Asus ARES 5870X2 that simply drew more than 300W from the get-go. Until now, however, we had yet to see a reference design from AMD/NVIDIA that exceeds any part of the PCIe specification by default.

So it has been clear for some time now that cards can exceed the PCIe specifications without incurring the immediate wrath of an army of lawyers, but at the same time this doesn’t establish what the benefits of being PCIe compliant are, or what the costs of forgoing compliance might be. For a reference design to exceed the PCIe specifications is certainly a new mark for the GPU industry, so we decided to get right to the bottom of the matter and answer the following question: does PCI-Express compliance matter?

To answer this question we went to two parties. The first was of course AMD, whose product is in question. AMD’s answer basically amounts to a polite deflection: it’s an ultra-enthusiast card that at default settings does not exceed the power available through the combination of the PCIe slot and PCIe power connectors. Furthermore, as they correctly note, the 6990 is not the first card to ship at over 300W, as the ARES and other cards were drawing more than 300W a year ago. It’s a polite answer that glosses over the fact that no, the 6990 isn’t technically PCIe compliant.

To get a second opinion on the matter we went straight to the source: the Peripheral Component Interconnect Special Interest Group (PCI-SIG), the industry group that defines the PCIe standard and runs the workshops that test for product compliance. The PCI-SIG’s membership spans virtually the entire computing industry, including AMD, NVIDIA, and Intel, so everyone has some level of representation within the group.

So what does the PCI-SIG think about cards such as the 6990 that exceed the PCIe specification? In a nutshell, they don’t directly care. The group’s working philosophy is closer to approving cards that work than to strictly enforcing standards, so their direct interest in the matter is limited. The holy grail of the PCI-SIG is the PCI Express Integrators List, which lists all the motherboards and add-in cards that have passed compliance testing. The principal purpose of the list is to help OEMs and system integrators choose hardware: they rely on the list, and by extension PCI-SIG testing, to confirm that a product meets the PCIe standards, so that they can be sure it will work in their systems.

The Integrators List is more or less exclusively OEM-focused, which means it has little significance for niche products such as the 6990, whose sales are split between end-user installation and highly customized OEM builds. The 6990 does not need to be on the list to be sold to its target market. Similarly, the 5970 was never submitted or approved for listing, and we wouldn’t expect the 6990 to be submitted either.

It is worth noting, however, that while the PCI-SIG does have power specifications, they’re not a principal concern of the group, which wants to avoid doing anything that would limit product innovation. The 300W specification was laid out under the belief that a further specification would not be necessary, and the PCI-SIG does not even test for power compliance under its current compliance testing procedures. Conceivably the 6990 could be submitted and pass, leading to it being labeled PCIe compliant. Of course, it’s equally conceivable that the PCI-SIG could start power compliance testing if it became an issue…

At the end of the day, as the PCI-SIG is a pro-compliance organization rather than a standards-enforcement organization, there’s little for AMD or their partners to lose by not being compliant with the PCIe power specifications. The only “penalty” for not having passed compliance testing is that AMD cannot claim the 6990 is PCIe compliant; funnily enough, they can even use the PCIe logo (we’ve already seen a Sapphire 6990 box with it). So does PCIe compliance matter? For mainstream products PCIe compliance matters for the purposes of getting OEM sales; for everything else, including niche products like the 6990, PCIe compliance does not matter.


  • Spazweasel - Tuesday, March 8, 2011

    I've always viewed single-card dual-GPU cards as more of a packaging stunt than a product.

    They invariably are clocked a little lower than the single-GPU cards they’re based upon, and short of a liquid cooling system are extremely noisy (unavoidable when you have twice as much heat to dissipate with the same size cooler as a single-GPU card). They also tend not to be a bargain price-wise; compare a dual-GPU card versus two of the single-GPU cards with the same GPU.

    Personally, I would much rather have discrete GPUs and be able to cool them without the noise. I'll spend a little more for a full-sized case and a motherboard with the necessary layout (two slots between PCIe x16 slots) rather than deal with the compromises of the extra-dense packaging. If someone else needs quad SLI or quad Crossfire, well, fine... to each their own. But if dual GPUs is the goal, I truly don't see any advantage of a dual-GPU card over dual single-GPU cards, and plenty of disadvantages.

    Like I said... more of a stunt than a product. Cool that it exists, but less useful than advertised except for extremely narrow niches.
  • mino - Tuesday, March 8, 2011

    Even -2- years since the release of the original Crysis, “but can it run Crysis?” is still an important question, and for -3.5- years the answer was “no.”

    Umm, you sure bout both those time values?
    :)

    Nice review, BTW.
  • MrSpadge - Tuesday, March 8, 2011

    "With a 375W TDP the 6990 should consume less power than 2x200W 6950CF, but in practice the 6950CF setup consumes 21W less. Part of this comes down to the greater CPU load the 6990 can create by allowing for higher framerates, but this doesn’t completely explain the disparity."

    If it hasn't been mentioned before: guys, this is simple. The TDP for the HD6950 is just the PowerTune limit. The "power draw under gaming" is specified at ~150 W, which is just what you'll find during gaming tests.

    Furthermore, Cayman is run at lower voltage (1.10 V) and clocks, and with fewer units on the HD6950, so it's only natural for two of these to consume less power than a HD6990. Summing it up, one would expect 1.10^2/1.12^2 * 800/830 * 22/24 = 85.2% of the power consumption of a Cayman on the HD6990.
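    (For anyone who wants to check that figure, here is a quick sketch of the comment's V²·f·units scaling estimate; the numbers are the commenter's, not official AMD specifications:)

    ```python
    # Back-of-the-envelope dynamic power scaling: P ~ V^2 * f * active units.
    # Figures are from the comment above, not official AMD specifications.
    v_6950, v_6990 = 1.10, 1.12     # core voltage (V)
    f_6950, f_6990 = 800, 830       # core clock (MHz)
    u_6950, u_6990 = 22, 24         # active SIMD units per GPU

    ratio = (v_6950 / v_6990) ** 2 * (f_6950 / f_6990) * (u_6950 / u_6990)
    print(f"{ratio:.1%}")  # -> 85.2%: one 6950 Cayman vs. one 6990 Cayman
    ```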

    MrS
  • mino - Tuesday, March 8, 2011

    You shall not hit them so hard next time. :)

    Numbers tend to hurt one's ego badly if properly thrown.
  • geok1ng - Tuesday, March 8, 2011

    The article points out that the 6990 runs much closer to 6950CF than 6970CF.

    I assume the author is talking about the 2GB 6950, which can be shader-unlocked, a process much safer than flashing the card with a 6970 BIOS.

    It would be interesting to see CF numbers for unlocked 6950s.

    As it stands the 6990 is not a great product: it requires an expensive PSU and a big case full of fans, at a price point higher than similar CF setups.

    Considering that there are ZERO enthusiast mobos that won't accept CF, the 6990 becomes a very hard sell.

    Even more troubling is the lack of a DL-DVI adapter in the bundle, scaring away 30" owners, precisely the group of buyers most interested in this video card.

    Why should a 30" owner step away from a 580 or SLI 580s, if the 6990 needs the same expensive PSU and the same BIG case full of fans, and adding a DL-DVI adapter costs more than the price gap to an SLI mobo?
  • Thanny - Tuesday, March 8, 2011

    This card looks very much like the XFX 4GB 5970 card. The GPU position and cooling setup are identical.

    I'd be very interested to see a performance comparison with that card, which operates at 5870 clock speeds and has the same amount of graphics memory (which is not "frame buffer", for those who keep misusing that term).
  • JumpingJack - Wednesday, March 9, 2011

    "(which is not "frame buffer", for those who keep misusing that term)"


    :) Yep, I wish they would actually make it right.

    The frame buffer is the amount of memory needed to store the pixel and color depth info for a renderable frame of data, whereas graphics memory (or VRAM) is the total memory available to the card, which consequently holds the frame buffer, command buffers, textures, etc. The frame buffer is just a small portion of the VRAM set aside as the output target for the GPU. The frame buffer size is the same for every modern video card on the planet at the same resolution. E.g. a 1920x1200 res with 32-bit color depth has a frame buffer of ~9.2 MB (1920 x 1200 x 32 / 8 bytes); if double- or triple-buffered, multiply by 2 or 3.
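    (A quick sketch of that arithmetic, for anyone who wants to plug in their own resolution; this is just the comment's formula expressed in Python:)

    ```python
    # Frame buffer size: width * height * bytes-per-pixel * number of buffers.
    # This is the comment's formula, not a measurement of any particular card.
    def frame_buffer_bytes(width: int, height: int,
                           bpp_bits: int = 32, buffers: int = 1) -> int:
        return width * height * (bpp_bits // 8) * buffers

    print(frame_buffer_bytes(1920, 1200) / 1e6)             # ~9.2 MB, single-buffered
    print(frame_buffer_bytes(1920, 1200, buffers=3) / 1e6)  # ~27.6 MB, triple-buffered
    # Either way, a sliver of the 1-2GB of VRAM on a modern card.
    ```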

    Most every tech site misapplies the term "frame buffer": Anandtech, PCPer (big abuser), Techreport... most everyone.
  • Hrel - Wednesday, March 9, 2011

    Anyone wanting to play at resolutions above 1080p should just buy two GTX 560s for 500 bucks. Why waste the extra 200? There's no such thing as future-proofing at these levels.
  • wellortech - Wednesday, March 9, 2011

    If the 560s are as noisy as the 570, I think I would rather try a pair of 6950s.
  • HangFire - Wednesday, March 9, 2011

    And you can't even bring yourself to mention Linux (non) support?

    You do realize there are high-end Linux workstation users, with CAD, custom software, and OpenCL development projects, who need this information?
