Crucial’s RealSSD C300 - The Perfect Test Candidate

The C300 is capable of pushing over 300MB/s in sequential reads, more than enough bandwidth to require 6Gbps SATA and to expose the limitations of PCIe 1.0 slots.

To test the C300 I’m using Highpoint’s RocketRAID 62X, a PCIe 2.0 x1 card with a Marvell 88SE9128 6Gbps controller on it.

What About P5x/H5x?

Unlike Intel’s X58, the P55 and H5x chipsets don’t have any PCIe 2.0 lanes. The LGA-1156 Core i7/i5/i3 processors have an on-die PCIe 2.0 controller with 16 lanes, but the chipset itself only has 8 PCIe 1.0 lanes. And as we’ve already established, a single PCIe 1.0 lane isn’t enough to feed a bandwidth hungry SSD on a 6Gbps SATA controller.
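
Putting rough numbers on that claim, here’s a quick back-of-the-envelope sketch (using the nominal PCIe signaling rates and 8b/10b encoding overhead; the helper function is my own illustration, not something from the test setup):

```python
# PCIe 1.0 signals at 2.5 GT/s and PCIe 2.0 at 5.0 GT/s, both with 8b/10b
# encoding, so only 8 of every 10 bits on the wire carry data.

def pcie_bandwidth_mb_s(gen: int, lanes: int = 1) -> float:
    """Peak usable bandwidth (MB/s, per direction) of a PCIe link."""
    signaling_gt_s = {1: 2.5, 2: 5.0}[gen]   # raw transfer rate per lane
    data_gbit_s = signaling_gt_s * 8 / 10    # after 8b/10b encoding
    return data_gbit_s * 1000 / 8 * lanes    # Gbit/s -> MB/s, times lane count

c300_seq_read_mb_s = 300  # the C300's sequential read rate quoted above

print(pcie_bandwidth_mb_s(gen=1))  # 250.0 -> a PCIe 1.0 x1 link caps the C300
print(pcie_bandwidth_mb_s(gen=2))  # 500.0 -> a PCIe 2.0 x1 link has headroom
print(c300_seq_read_mb_s > pcie_bandwidth_mb_s(gen=1))  # True: the drive outruns 1.0 x1
```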

Gigabyte does the obvious thing and uses the PCIe 2.0 lanes coming off the CPU for USB 3 and 6Gbps SATA. This works perfectly if you are using integrated graphics. If you’re using discrete graphics, you have the option of giving the card 8 lanes and letting USB 3/SATA 6Gbps use the remaining lanes. Most graphics cards are just fine running in x8 mode, so it’s not too big of a loss. If you have two graphics cards installed, however, Gigabyte’s P55 boards will switch to using the PCIe 1.0 lanes from the P55/H5x.

ASUS does the same on its lower end P55 boards, but takes a different approach on its SLI/CF P55 boards. Enter the PLX PEX8608:

The PLX PEX8608 is a PCIe switch that combines four PCIe x1 lanes from the PCH and devotes their combined bandwidth to the Marvell 6Gbps controller. You lose some usable PCIe lanes from the PCH, but you get PCIe 2.0-like performance from the Marvell controller.
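
The arithmetic behind that claim is simple enough to sanity check (my own sketch, reusing the nominal per-lane figures rather than anything from PLX’s documentation):

```python
# Four PCIe 1.0 lanes at ~250 MB/s each, aggregated behind the PLX switch,
# versus the single PCIe 2.0 lane (~500 MB/s) the controller would otherwise get.
aggregated_gen1_x4_mb_s = 4 * 250  # 1000 MB/s upstream through the switch
single_gen2_x1_mb_s = 1 * 500      # 500 MB/s for a PCIe 2.0 x1 link

# The aggregated link actually has twice the bandwidth of a 2.0 x1 link, so the
# path to the Marvell controller is no longer the bottleneck.
print(aggregated_gen1_x4_mb_s, single_gen2_x1_mb_s)  # 1000 500
```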

For most users, ASUS and Gigabyte’s differing approaches should deliver the same results. If you are running a multi-GPU setup, then ASUS’ approach makes more sense if you are planning on using a 6Gbps SATA drive. The downside is the added cost and power consumed by the PLX chip (an extra ~1.5W).

Comments

  • iwodo - Thursday, March 25, 2010 - link

    HDD performance has never really been the centre of discussion, since hard drives have always been slow anyway. But with SSDs it has finally become clear that the SATA controller makes a lot of difference.

    So what can we expect from future SATA controllers? Is there any more performance we can squeeze out?
  • KaarlisK - Thursday, March 25, 2010 - link

    Do the P55 boards allow plugging a graphics card into one x16 slot and an IO card into the other x16 slot?
    According to Intel's chipset specs, only the server versions of the chipset should allow that.
  • CharonPDX - Thursday, March 25, 2010 - link

    You talk about combining four PCIe 1.0 lanes to get "PCIe 2.0-like performance".

    PCIe doesn't care what generation it is. It only cares about how much bandwidth is available.

    Four PCIe 1.0 lanes will provide DOUBLE the bandwidth of one PCIe 2.0 lane. (4x250=1000 each way, 1x500=500 each way.)

    ICH10 and the P/H55 PCHs have 6-8 PCIe 1.0 lanes, more than enough to dwarf the measly 2 PCIe 2.0 lanes the AMD chipset has. (6x250=1500 and 8x250=2000 are both greater than 2x500=1000.) Irregardless, all three chipsets only have 2 GB/s between those PCIe ports and the memory controller.

    Why Highpoint cheaped out and put a two-port SATA 6Gb/s controller on a one-lane PCIe card is beyond me. Even at PCIe 2.0, that's still woefully inadequate. That REALLY should be on a four-lane card. Nobody but an enthusiast is going to buy it right now, and more and more "mainstream" boards are coming with 4-lane PCIe slots.

    By the way, the 4-lane slot on the DX58SO is PCIe 2.0, per http://downloadmirror.intel.com/18128/eng/DX58SO_P...

    The fact that you have dismal results on a "1.0 slot" has nothing to do with it being 1.0, and everything to do with available bandwidth. If you put the exact same chip on a PCIe 1.0 4-lane card, you would see performance identical to your one-lane card in a PCIe 2.0 slot (possibly better, if the drive can more than saturate 500 MB/s). (I would have liked to see performance numbers running that card in the AMD board's PCIe 2.0 one-lane slot.)
  • Anand Lal Shimpi - Thursday, March 25, 2010 - link

    The problem is that all of the on-board controllers and the cheaper add-in cards are PCIe 2.0 x1 designs.

    Intel's latest DX58SO BIOS lists what mode the PCIe slots are operating in, and when I install the HighPoint card in the x4 slot it lists its operating mode as 2.5GT/s and not 5.0GT/s. The x16 slots are correctly listed as 5.0GT/s.

    Take care,
    Anand
  • qwertymac93 - Thursday, March 25, 2010 - link

    While the SB850 has 2 PCIe 2.0 lanes, the 890GX northbridge has 16 for graphics cards and another 6 lanes for anything else (that's 24 in total, btw). The southbridge is connected to the northbridge by a link similar to 4 PCIe 2.0 lanes, thus 2GB/s (16 gigabits/s). I have no idea why you think the "measly" two lanes coming off the southbridge say anything about its SATA performance, nor do I understand why you think the 6 lanes coming off of Intel's H55 (being fed by a slow DMI link) are somehow better.

    P.S. I don't think "irregardless" is a word; it's sort of a self-contained double negative. "ir" = not or without, "regard" = care or worth, "less" = not or without. So "irregardless" = not without care or worth.
  • CharonPDX - Thursday, March 25, 2010 - link

    Both the SB850 and the Intel chipsets have 2 GB/s links between the NB and SB (or CPU and SB, in the P/H55.)

    And you are correct, I was not referring at all to the SB850's onboard SATA controller; solely to its PCIe slots. Six lanes of PCIe 1.0 has more available bandwidth than two lanes of PCIe 2.0. This comes into play when using an add-in card.

    (Yes, I know "irregardless" isn't a real word, it's just fun to use.)
  • CharonPDX - Thursday, March 25, 2010 - link

    P.S. Go get a Highpoint RocketRAID 640. It has the exact same SATA 6Gbps chip as the card you used, but on an x4 connector (and with four SATA ports instead of two, and with RAID, but if you're only running one drive it should be identical). Run it in the PCIe 1.0 x4 slot on the P55 board. Compare that to the x4 slot on the 890GX board. I bet you'll see *ZERO* difference when running just one drive.

    In fact, I bet on the 890GX board, you'll see the exact same performance on the RR640 in the x4 slot as on the Rocket 600 in the x1 slot.
  • oggy - Thursday, March 25, 2010 - link

    It would be fun to see some dual C300 action :)
  • wiak - Friday, March 26, 2010 - link

    yes, on both the AMD SB850's 6Gbps ports and the Marvell 6Gbps controllers on AMD and Intel ;)
  • Ramon Zarat - Thursday, March 25, 2010 - link

    Unfortunately, testing with 1 drive gives us only 1/3 of the picture.

    To REALLY saturate the SATA3/PCIe bus, 2 drives in striped RAID 0 should have been used.

    To REALLY saturate everything (SATA3/USB3/PCIe) AT THE SAME TIME, an external SATA3 to USB3 SSD cradle transferring to/from 2 SATA3 SSDs in striped RAID 0 should have been used.

    The only thing needed to get a complete and definitive picture to settle this question once and for all would have been 2 more SATA3 SSDs and a cradle...

    Excellent review, but incomplete in my view.
