Crucial’s RealSSD C300 - The Perfect Test Candidate

The C300 can push over 300MB/s in sequential reads, more than enough bandwidth to require 6Gbps SATA and to expose the limitations of PCIe 1.0 slots.
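To put numbers on that: SATA and PCIe 1.x/2.0 all use 8b/10b encoding, so a quick sketch (my own arithmetic, not figures from the review) shows where a PCIe 1.0 x1 link falls short:

```python
# Rough effective-bandwidth math for 8b/10b-encoded serial links:
# every data byte costs 10 bits on the wire, so usable MB/s ~= line rate / 10.
def effective_mb_s(line_rate_gbps):
    """Approximate usable MB/s for an 8b/10b-encoded serial link."""
    return line_rate_gbps * 1000 / 10

sata_3g  = effective_mb_s(3.0)  # ~300 MB/s: the C300 saturates this
sata_6g  = effective_mb_s(6.0)  # ~600 MB/s: headroom for 300MB/s+ reads
pcie1_x1 = effective_mb_s(2.5)  # ~250 MB/s: less than the C300 can push
pcie2_x1 = effective_mb_s(5.0)  # ~500 MB/s: enough for the drive

print(sata_3g, sata_6g, pcie1_x1, pcie2_x1)
```

That ~250MB/s ceiling on a PCIe 1.0 x1 link (before protocol overhead, which lowers it further) is why the slot matters as much as the controller.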

To test the C300 I’m using Highpoint’s RocketRAID 62X. This PCIe 2.0 x1 card has a Marvell 88SE9128 6Gbps controller on it.

What About P5x/H5x?

Unlike Intel’s X58, the P55 and H5x chipsets don’t have any PCIe 2.0 lanes. The LGA-1156 Core i7/i5/i3 processors have an on-die PCIe 2.0 controller with 16 lanes, but the chipset itself only has eight PCIe 1.0 lanes. And as we’ve already established, a single PCIe 1.0 lane isn’t enough to feed a bandwidth hungry SSD on a 6Gbps SATA controller.

Gigabyte does the obvious thing and uses the PCIe 2.0 lanes coming off the CPU for USB 3 and 6Gbps SATA. This works perfectly if you are using integrated graphics. If you’re using discrete graphics, you have the option of giving the GPU eight lanes and letting USB 3/SATA 6Gbps use the remainder. Most graphics cards are just fine running in x8 mode, so it’s not too big of a loss. If you have two graphics cards installed, however, Gigabyte’s boards switch the USB 3/SATA 6Gbps controllers back to the PCIe 1.0 lanes off the P55/H5x chipset.

ASUS does the same on its lower end P55 boards, but takes a different approach on its SLI/CF boards. Enter the PLX PEX8608:

The PLX PEX8608 is a PCIe switch that aggregates four of the PCH’s PCIe 1.0 lanes and devotes their combined bandwidth to the Marvell 6Gbps controller. You lose some usable PCIe lanes from the PCH, but the Marvell controller gets PCIe 2.0-like bandwidth.
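As a sketch of the math behind this approach (my own arithmetic; these are not ASUS-published figures):

```python
# Rough math behind the PEX8608 approach: four PCIe 1.0 lanes upstream
# to the PCH vs. the Marvell controller's single PCIe 2.0 lane downstream.
PCIE1_LANE_MB_S = 250  # 2.5 Gbps line rate with 8b/10b encoding
PCIE2_LANE_MB_S = 500  # 5.0 Gbps line rate with 8b/10b encoding

upstream   = 4 * PCIE1_LANE_MB_S   # ~1000 MB/s from the PCH through the switch
downstream = 1 * PCIE2_LANE_MB_S   # ~500 MB/s into the Marvell 88SE9128

# The aggregated upstream link comfortably covers the controller's
# PCIe 2.0 x1 link, so the Marvell chip is no longer starved by a
# single PCIe 1.0 lane.
print(upstream >= downstream)  # True
```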

For most users, ASUS and Gigabyte’s varying approaches should deliver the same results. If you are running a multi-GPU setup, then ASUS’ approach makes more sense if you are planning on using a 6Gbps SATA drive. The downside is the added cost and power consumed by the PLX chip (an extra ~1.5W).


57 Comments


  • vol7ron - Thursday, March 25, 2010 - link

    It would be extremely nice to see any RAID tests, as I've been asking Anand for months.

    I think he said a full review is coming; of course, he could have just been toying with my emotions.
  • nubie - Thursday, March 25, 2010 - link

    Is there any logical reason you couldn't run a video card with x15 or x14 links and send the other 1 or 2 off to the 6Gbps and USB 3.0 controllers?

    As far as I am concerned it should work (and I have a geforce 6200 modified to x1 with a dremel that has been in use for the last couple years).

    Maybe the drivers or video bios wouldn't like that kind of lane splitting on some cards.

    You can test this yourself quickly by applying some scotch tape over a few of the signal pairs on the end of the video card, you should be able to see if modern cards have any trouble linking at x9-x15 link widths.
  • nubie - Thursday, March 25, 2010 - link

    Not to mention, where are the x4 6Gbps cards?
  • wiak - Friday, March 26, 2010 - link

    the Marvell chip is a PCIe 2.0 x1 chip anyway, so it's limited to that speed regardless of the interface to the motherboard

    at least this says so
    https://docs.google.com/viewer?url=http://www.marv...

    same goes for USB 3.0 from NEC, it's also a PCIe 2.0 x1 chip
  • JarredWalton - Thursday, March 25, 2010 - link

    Like many computer interfaces, PCIe is designed to work in powers of two. You could run x1, x2, x4, x8, or x16, but x3 or x5 aren't allowable configurations.
  • nubie - Thursday, March 25, 2010 - link

    OK, x12 is accounted for according to this:

    http://www.interfacebus.com/Design_Connector_PCI_E...

    [quote]PCI Express supports 1x [2.5Gbps], 2x, 4x, 8x, 12x, 16x, and 32x bus widths[/quote]

    I wonder about x14, as it should offer much greater bandwidth than x8.

    I suppose I could do some informal testing here and see what really works, or maybe do some internet research first because I don't exactly have a test bench.
  • mathew7 - Thursday, March 25, 2010 - link

    While 12x is good for 1 card, I wonder how feasible 6x would be for 2 gfx cards.
  • nubie - Thursday, March 25, 2010 - link

    Even AMD agrees to the x12 link width:

    http://www.amd.com/us-en/Processors/ComputingSolut...

    Seems like it could be an acceptable compromise on some platforms.
  • JarredWalton - Thursday, March 25, 2010 - link

    x12 is the exception to the powers of 2, you're correct. I'm not sure it would really matter much; Anand's results show that even with plenty of extra bandwidth (i.e. in a PCIe 2.0 x16 slot), the SATA 6G connection doesn't always perform the same. It looks like BIOS tuning is at present more important than other aspects, provided of course that you're not on a PCIe 1.0 x1 link.
  • iwodo - Thursday, March 25, 2010 - link

    Well, we are speaking in terms of graphics, so could a GFX card work at x12, or even x10, instead of x16? That would save I/O space. Just wondering, what's the status of PCIe 3.0...
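A quick back-of-the-envelope sketch of the link widths debated in the thread above (x12 is in the PCIe spec; x14 is not, and is included purely as a what-if):

```python
# Aggregate usable bandwidth per link width, assuming 8b/10b encoding:
# each PCIe 1.0 lane moves ~250 MB/s, each PCIe 2.0 lane ~500 MB/s.
def link_bw_mb_s(lanes, gen):
    """Approximate aggregate MB/s for a link of `lanes` lanes at PCIe gen 1 or 2."""
    per_lane = {1: 250, 2: 500}[gen]
    return lanes * per_lane

for lanes in (1, 4, 8, 12, 14, 16):  # x12 is spec-legal; x14 is hypothetical
    print(f"x{lanes}: gen1 {link_bw_mb_s(lanes, 1)} MB/s, "
          f"gen2 {link_bw_mb_s(lanes, 2)} MB/s")
```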
