The First Test: Sequential Read Speed

The C300 can break 300MB/s in sequential read performance, so it's the perfect test for 6Gbps SATA bandwidth.
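
As a quick sanity check on why a drive this fast needs the faster interface, here's a back-of-the-envelope sketch of SATA payload bandwidth (theoretical ceilings only, assuming 8b/10b encoding and ignoring protocol overhead):

```python
# Back-of-the-envelope SATA payload bandwidth (theoretical ceilings only).
# SATA uses 8b/10b encoding, so only 8 of every 10 bits on the wire carry
# data; real drives and controllers lose a bit more to protocol overhead.

def sata_payload_mb_s(line_rate_gbps):
    """Theoretical max payload in MB/s for a given SATA line rate."""
    bits_per_second = line_rate_gbps * 1e9
    data_fraction = 8 / 10                              # 8b/10b encoding
    return bits_per_second * data_fraction / 8 / 1e6    # bits -> bytes -> MB

for rate in (3.0, 6.0):
    print(f"SATA {rate:.0f}Gbps: ~{sata_payload_mb_s(rate):.0f} MB/s")
# SATA 3Gbps: ~300 MB/s
# SATA 6Gbps: ~600 MB/s
```

A 3Gbps link tops out right around the C300's sequential read speed, which is exactly why the 6Gbps interface matters for this drive.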

Intel's X58 is actually the best platform here, delivering over 340MB/s from the C300. If anything, we're bound by the Marvell controller or the drive itself in this case. AMD's 890GX follows at 319MB/s. It's faster than 3Gbps SATA for sure, but just not quite as fast as the Marvell controller on an Intel X58.

The most surprising result is that the Marvell controller on Intel's P55 platform, even in a PCIe 2.0 x16 slot, only delivers 308MB/s of read bandwidth. The PCIe controller is on the CPU die and should theoretically be lower latency than anything the X58 can muster, but for whatever reason it actually delivers lower bandwidth than the off-die X58 PCIe controller. This is true regardless of whether we use Lynnfield or Clarkdale in the motherboard, or whether we're using a P55, H55 or H57 board. All platform/CPU combinations result in performance right around 310MB/s - a good 30MB/s slower than the X58. Remember that this is Intel's first on-die PCIe implementation; it's possible that some performance was sacrificed to ensure compatibility first. We may see better performance out of Sandy Bridge in 2011.

Using any of the PCIe 1.0 slots delivers absolutely horrid performance. Thanks to encoding and bus overhead, the most we can get out of a PCIe 1.0 slot is ~192MB/s with our setup. Intel's X58 board has a PCIe 1.0 x4 slot that, for some reason, gives us better performance than any other 1.0 slot despite our card only using one of its lanes.
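
Both PCIe 1.0 and 2.0 use the same 8b/10b encoding, so the per-lane arithmetic looks just like the SATA case above - a Gen1 lane tops out at 250MB/s of payload, and packet headers plus flow control are what pull our measured number down to ~192MB/s. A rough sketch:

```python
# Rough PCIe payload ceilings (Gen1 and Gen2 both use 8b/10b encoding).
# Real throughput is lower still: TLP headers, DLLPs and flow control all
# take their cut, which is how a Gen1 x1 link ends up around ~192MB/s here.

def pcie_payload_mb_s(transfer_rate_gt_s, lanes=1):
    """Theoretical max payload in MB/s for a PCIe 1.x/2.x link."""
    return transfer_rate_gt_s * 1e9 * (8 / 10) / 8 / 1e6 * lanes

print(f"PCIe 1.0 x1: ~{pcie_payload_mb_s(2.5):.0f} MB/s")      # ~250 MB/s
print(f"PCIe 2.0 x1: ~{pcie_payload_mb_s(5.0):.0f} MB/s")      # ~500 MB/s
print(f"PCIe 1.0 x4: ~{pcie_payload_mb_s(2.5, 4):.0f} MB/s")   # ~1000 MB/s
```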

Using one of the x1 slots on a P55 motherboard limits us to a disappointing 163.8MB/s. In other words, there's no benefit to even having a 6Gbps drive here. ASUS' PLX implementation, however, fixes that right up - at 336.9MB/s it's within earshot of Intel's X58.
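
We won't claim to know the exact topology ASUS uses behind its PLX bridge, but the idea is straightforward: the Marvell controller wants a 5GT/s (Gen2) x1 link, and a bridge chip can gang together enough of the PCH's 2.5GT/s lanes on the upstream side to keep that link fed. A hypothetical sketch of the lane math (the lane counts are illustrative, not taken from the board's actual layout):

```python
import math

GEN1_LANE_MB_S = 250   # 2.5 GT/s per lane with 8b/10b encoding
GEN2_LANE_MB_S = 500   # 5.0 GT/s per lane with 8b/10b encoding

def min_gen1_upstream_lanes(downstream_mb_s):
    """Smallest number of Gen1 lanes that won't bottleneck the downstream link."""
    return math.ceil(downstream_mb_s / GEN1_LANE_MB_S)

# Feeding a single Gen2 x1 downstream link (the Marvell controller):
print(min_gen1_upstream_lanes(GEN2_LANE_MB_S))   # 2 Gen1 lanes minimum
```

With at least two Gen1 lanes behind the bridge, the controller is no longer starved by a single 2.5GT/s lane, which is why the C300 gets back to within a few MB/s of the X58 result.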

It's also worth noting that you're better off using your 6Gbps SSD on one of the native 3Gbps SATA ports rather than using a 6Gbps card in a PCIe 1.0 slot. Intel's native SATA ports read at ~265MB/s - better than the Marvell controller in any PCIe 1.0 slot.

Comments

  • iwodo - Thursday, March 25, 2010 - link

    HDD performance has never really been the centre of discussion, since hard drives were always slow anyway. But with SSDs, it has finally become clear that the SATA controller makes a lot of difference.

    So what can we expect from future SATA controllers? Is there any more performance we can squeeze out?
  • KaarlisK - Thursday, March 25, 2010 - link

    Do the P55 boards allow plugging in a graphics card in one x16 slot, and an IO card in the other x16 slot?
    According to Intel chipset specs, only the server versions of the chipset should allow that.
  • CharonPDX - Thursday, March 25, 2010 - link

    You talk about combining four PCIe 1.0 lanes to get "PCIe 2.0-like performance".

    PCIe doesn't care what generation it is. It only cares about how much bandwidth is available.

    Four PCIe 1.0 lanes will provide DOUBLE the bandwidth of one PCIe 2.0 lane. (4x250=1000 each way, 1x500=500 each way.)

    The fact that ICH10 and the P/H55 PCHs have 6-8 PCIe 1.0 lanes is more than enough to dwarf the measly 2 PCIe 2.0 lanes the AMD chipset has. (6x250=1500 or 8x250=2000 are both greater than 2*500=1000.) Irregardless, all three chipsets only have 2 GB/s between those PCIe ports and the memory controller.

    Why Highpoint cheaped out and put a two-port SATA 6Gb/s controller on a one-lane PCIe card is beyond me. Even at PCIe 2.0, that's still woefully inadequate. That REALLY should be on a four-lane card. Nobody but an enthusiast is going to buy it right now, and more and more "mainstream" boards are coming with 4-lane PCIe slots.

    By the way, the 4-lane slot on the DX58SO is PCIe 2.0, per http://downloadmirror.intel.com/18128/eng/DX58SO_P...

    The fact that you have dismal results on a "1.0 slot" has nothing to do with it being 1.0, and everything to do with available bandwidth. If you put the exact same chip on a PCIe 1.0 4-lane card, you would see identical performance (possibly better, if the drive is more than enough to saturate 500 MB/s) to your one-lane card in a PCIe 2.0 slot. (I would have liked to see performance numbers running that card in the AMD board's PCIe 2.0 one-lane slot.)
  • Anand Lal Shimpi - Thursday, March 25, 2010 - link

    The problem is that all of the on-board controllers and the cheaper add-in cards are all PCIe 2.0 x1 cards.

    Intel's latest DX58SO BIOS lists what mode the PCIe slots are operating in and when I install the HighPoint card in the x4 it lists its operating mode as 2.5GT/s and not 5.0GT/s. The x16 slots are correctly listed as 5.0GT/s.

    Take care,
    Anand
  • qwertymac93 - Thursday, March 25, 2010 - link

    While the SB850 has 2 PCIe 2.0 lanes, the 890GX northbridge has 16 for graphics cards, and another 6 lanes for anything else (that's 24 in total, btw). The southbridge is connected to the northbridge with something similar to 4 PCIe 2.0 lanes, thus 2GB/s (16 gigabits/s). I have no idea why you think the "measly" two lanes coming off the southbridge mean anything about its SATA performance, nor do I understand why you think the 6 lanes coming off of Intel's H55 (being fed by a slow DMI link) are somehow better.

    P.S. I don't think "irregardless" is a word; it's sorta like a self-contained double-negative. "ir" = not or without, "regard" = care or worth, "less" = not or without. "Irregardless" = not without care or worth.
  • CharonPDX - Thursday, March 25, 2010 - link

    Both the SB850 and the Intel chipsets have 2 GB/s links between the NB and SB (or CPU and SB, in the P/H55.)

    And you are correct, I was not referring at all to the SB850's onboard SATA controller; solely to its PCIe slots. Six lanes of PCIe 1.0 have more available bandwidth than two lanes of PCIe 2.0. This comes into play when using an add-in card.

    (Yes, I know "irregardless" isn't a real word, it's just fun to use.)
  • CharonPDX - Thursday, March 25, 2010 - link

    P.S. Go get a Highpoint RocketRAID 640. It has the exact same SATA 6Gb chip as the card you used, but on a x4 connector (and with four SATA ports instead of two, and with RAID. But if you're only running one drive, it should be identical.) Run it in the PCIe 1.0 x4 slot on the P55 board. Compare that to the x4 slot on the 890GX board. I bet you'll see *ZERO* difference when running just one drive.

    In fact, I bet on the 890GX board, you'll see the exact same performance on the RR640 in the x4 slot as on the Rocket 600 in the x1 slot.
  • oggy - Thursday, March 25, 2010 - link

    It would be fun to see some dual C300 action :)
  • wiak - Friday, March 26, 2010 - link

    Yes, on both the AMD 6Gbps SB850 and the Marvell 6Gbps on AMD and Intel ;)
  • Ramon Zarat - Thursday, March 25, 2010 - link

    Unfortunately, testing with 1 drive gives us only 1/3 of the picture.

    To REALLY saturate the SATA3/PCIe bus, 2 drives in striped RAID 0 should have been used.

    To REALLY saturate everything (SATA3/USB3/PCIe) AT THE SAME TIME, an external SATA3 to USB3 SSD cradle transferring to/from 2 SATA3 SSDs in striped RAID 0 should have been used.

    The only thing needed to get a complete and definitive picture to settle this question once and for all would have been 2 more SATA3 SSDs and a cradle...

    Excellent review, but incomplete in my view.
