The First Test: Sequential Read Speed

The C300 can break 300MB/s in sequential read performance, so it’s the perfect test for 6Gbps SATA bandwidth.
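
To put that in perspective, here’s a quick back-of-the-envelope look at what 3Gbps and 6Gbps SATA can actually deliver once you strip out 8b/10b encoding. The ~10% protocol/framing overhead figure below is an assumption for illustration, not a measured value:

# Rough usable SATA bandwidth: the link is 8b/10b encoded, so only 80%
# of the raw bits carry payload. The protocol overhead figure is an
# assumed, illustrative number - real efficiency varies by workload.
def sata_usable_mb_s(line_rate_gbps, protocol_overhead=0.10):
    payload_bits = line_rate_gbps * 1e9 * 8 / 10  # strip 8b/10b encoding
    return payload_bits / 8 * (1 - protocol_overhead) / 1e6

print("SATA 3Gbps: ~%.0f MB/s usable" % sata_usable_mb_s(3))  # ~270 MB/s
print("SATA 6Gbps: ~%.0f MB/s usable" % sata_usable_mb_s(6))  # ~540 MB/s

A 3Gbps port simply can’t pass 300MB/s of data no matter how fast the drive behind it is, which is exactly why the C300 makes a useful stress test here.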

Intel’s X58 is actually the best platform here, delivering over 340MB/s from the C300. If anything, we’re bound by the Marvell controller or the drive itself in this case. AMD’s 890GX follows at 319MB/s - faster than 3Gbps SATA for sure, but not quite as fast as the Marvell controller on Intel’s X58.

The most surprising result is that the Marvell controller on Intel’s P55 platform, even in a PCIe 2.0 x16 slot, only delivers 308MB/s of read bandwidth. The PCIe controller is on the CPU die and should theoretically be lower latency than anything the X58 can muster, but for whatever reason it actually delivers lower bandwidth than the off-die X58 PCIe controller. This is true regardless of whether we use a Lynnfield or Clarkdale CPU, or a P55, H55 or H57 motherboard. All platform/CPU combinations result in performance right around 310MB/s - a good 30MB/s slower than the X58. Remember that this is Intel’s first on-die PCIe implementation. It’s possible that performance took a back seat to ensuring compatibility. We may see better performance out of Sandy Bridge in 2011.

Using any of the PCIe 1.0 slots delivers absolutely horrid performance. Thanks to encoding and bus overhead, the most we can get out of a PCIe 1.0 slot is ~192MB/s with our setup. For some reason, Intel’s X58 board has a PCIe 1.0 x4 slot that gives us better performance than any other 1.0 slot, despite the card only using one of its lanes.
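
The math behind that ~192MB/s ceiling is straightforward: a PCIe 1.0 lane runs at 2.5GT/s with 8b/10b encoding, leaving 250MB/s of raw payload bandwidth per direction before any packet overhead. The sketch below (the ~20% packet overhead is an assumed, illustrative figure - real efficiency depends on payload size and the devices involved) lands right around what we see:

# Rough per-lane PCIe bandwidth. PCIe 1.0/2.0 use 8b/10b encoding, so a
# lane's raw payload ceiling is line_rate * 0.8 / 8 bytes per second.
# The packet overhead below is an assumed figure for illustration only.
def pcie_mb_s(gt_per_s, lanes=1, packet_overhead=0.20):
    raw = gt_per_s * 1e9 * 8 / 10 / 8 * lanes  # bytes/s after 8b/10b
    return raw * (1 - packet_overhead) / 1e6

print("PCIe 1.0 x1: ~%.0f MB/s" % pcie_mb_s(2.5))  # ~200 MB/s
print("PCIe 2.0 x1: ~%.0f MB/s" % pcie_mb_s(5.0))  # ~400 MB/s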

Using one of the x1 slots on a P55 motherboard limits us to a disappointing 163.8MB/s. In other words, there’s no benefit to even having a 6Gbps drive here. ASUS’ PLX implementation, however, fixes that right up - at 336.9MB/s it’s within earshot of Intel’s X58.

It’s also worth noting that you’re better off using your 6Gbps SSD on one of the native 3Gbps SATA ports than using a 6Gbps card in a PCIe 1.0 slot. Intel’s native SATA ports read at ~265MB/s - better than the Marvell controller in any PCIe 1.0 slot.
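
If you’re ever unsure what link a given slot actually negotiated with a card, it’s easy to check in software. Here’s a minimal sketch, assuming a Linux system that exposes the standard sysfs PCIe link attributes (current_link_width and friends):

import glob, os

# Print negotiated vs. maximum PCIe link width and speed for every PCI
# device that exposes the standard sysfs link attributes; devices that
# lack them (plain PCI devices, for example) are simply skipped.
def read_attr(dev, name):
    with open(os.path.join(dev, name)) as f:
        return f.read().strip()

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    if not os.path.exists(os.path.join(dev, "current_link_width")):
        continue
    print("%s  x%s of x%s  @ %s (max %s)" % (
        os.path.basename(dev),
        read_attr(dev, "current_link_width"), read_attr(dev, "max_link_width"),
        read_attr(dev, "current_link_speed"), read_attr(dev, "max_link_speed")))

A 6Gbps controller that reports 2.5GT/s here has fallen back to PCIe 1.0 signaling, and sequential reads will cap out well below what the drive can do.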

Comments

  • vol7ron - Thursday, March 25, 2010 - link

    It would be extremely nice to see any RAID tests, as I've been asking Anand for months.

    I think he said a full review is coming; of course, he could have just been toying with my emotions.
  • nubie - Thursday, March 25, 2010 - link

    Is there any logical reason you couldn't run a video card with x15 or x14 links and send the other 1 or 2 off to the 6Gbps and USB 3.0 controllers?

    As far as I am concerned it should work (and I have a geforce 6200 modified to x1 with a dremel that has been in use for the last couple years).

    Maybe the drivers or video bios wouldn't like that kind of lane splitting on some cards.

    You can test this yourself quickly by applying some scotch tape over a few of the signal pairs on the end of the video card; you should be able to see if modern cards have any trouble linking at x9-x15 link widths.
  • nubie - Thursday, March 25, 2010 - link

    Not to mention, where are the x4 6Gbps cards?
  • wiak - Friday, March 26, 2010 - link

    The Marvell chip is a PCIe 2.0 x1 chip anyway, so it's limited to that speed regardless of the interface to the motherboard.

    At least this says so:
    https://docs.google.com/viewer?url=http://www.marv...

    Same goes for USB 3.0 from NEC; it's also a PCIe 2.0 x1 chip.
  • JarredWalton - Thursday, March 25, 2010 - link

    Like many computer interfaces, PCIe is designed to work in powers of two. You could run x1, x2, x4, x8, or x16, but x3 or x5 aren't allowable configurations.
  • nubie - Thursday, March 25, 2010 - link

    OK, x12 is accounted for according to this:

    http://www.interfacebus.com/Design_Connector_PCI_E...

    "PCI Express supports 1x [2.5Gbps], 2x, 4x, 8x, 12x, 16x, and 32x bus widths"

    I wonder about x14, as it should offer much greater bandwidth than x8.

    I suppose I could do some informal testing here and see what really works, or maybe do some internet research first because I don't exactly have a test bench.
  • mathew7 - Thursday, March 25, 2010 - link

    While 12x is good for 1 card, I wonder how feasible 6x would be for 2 gfx cards.
  • nubie - Thursday, March 25, 2010 - link

    Even AMD agrees to the x12 link width:

    http://www.amd.com/us-en/Processors/ComputingSolut...

    Seems like it could be an acceptable compromise on some platforms.
  • JarredWalton - Thursday, March 25, 2010 - link

    x12 is the exception to the powers of 2; you're correct. I'm not sure it would really matter much; Anand's results show that even with plenty of extra bandwidth (i.e. in a PCIe 2.0 x16 slot), the SATA 6G connection doesn't always perform the same. It looks like BIOS tuning is at present more important than other aspects, provided of course that you're not in a PCIe 1.0 x1 slot.
  • iwodo - Thursday, March 25, 2010 - link

    Well, we are speaking in terms of graphics here. So could a GFX card work at 12x, or even 10x, instead of 16x, thereby saving I/O space? Just wondering what the status of PCI-E 3.0 is....
