Earlier this month my Crucial RealSSD C300 died in the middle of testing for AMD’s 890GX launch. This was a problem for two reasons:

1) Crucial’s RealSSD C300 is currently shipping and selling to paying customers. The 256GB drive costs $799.
2) AMD’s 890GX is the first chipset to natively support 6Gbps SATA. The C300 is the first SSD to natively support the standard as well. Butter, meet toast.

Since then, Crucial dispatched a new drive and discovered what happened to my first drive (more on this in a separate update). While waiting for the autopsy report, I decided to look at 890GX 6Gbps performance since it was absent from my original review.


AMD's SB850 with Native 6Gbps SATA

In the 890GX review I found that AMD’s new South Bridge, the SB850, wasn’t quite as fast as Intel’s ICH/PCH when dealing with the latest crop of high performance SSDs. My concerns were particularly about high-bandwidth or high-IOPS situations, admittedly things you only bump into if you’re spending a good amount of money on an SSD. Case in point, here is OCZ’s Vertex LE running on an AMD 890GX compared to an Intel H55:

Iometer 6-22-2008 Performance   2MB Sequential Read   2MB Sequential Write   4KB Random Read   4KB Random Write (4K Aligned)
AMD 890GX                       248 MB/s              217.5 MB/s             38.4 MB/s         130.1 MB/s
Intel H55                       264.9 MB/s            247.7 MB/s             48.6 MB/s         180 MB/s

My concern was that if 3Gbps SSDs were underperforming on the SB850, then 6Gbps SSDs definitely would be.

Other reviewers had mixed results with the SB850: some boards did well while others fared worse. I also discovered that AMD does its internal testing on an internal reference board with both Cool’n’Quiet and South Bridge power management disabled, which is why disabling CnQ improved performance in my results. As for why AMD runs any of its internal testing that way, your guess is as good as mine.

I received an ASUS 890GX board for this followup and updated it to the latest BIOS. That didn’t fix my performance problems. Using AMD’s latest SB850 AHCI drivers (1.2.0.164), however, did...sort of:

Iometer 6-22-2008 Performance   2MB Sequential Read   2MB Sequential Write   4KB Random Read   4KB Random Write (4K Aligned)
AMD 890GX (3/2/10 driver)       248 MB/s              217.5 MB/s             38.4 MB/s         130.1 MB/s
AMD 890GX (3/25/10 driver)      253.5 MB/s            223.8 MB/s             51.2 MB/s         152.1 MB/s
Intel H55                       264.9 MB/s            247.7 MB/s             48.6 MB/s         180 MB/s

Performance improved across the board, but with one exception we’re still looking at lower numbers than Intel’s 3Gbps SATA controller. The exception is random read speed, which is now faster on the 890GX than on H55 (though still slower than X58).
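To put the remaining gap in perspective, here’s a quick Python sketch that computes the percentage difference between the 890GX (on the 3/25/10 driver) and the H55, using the numbers from the table above:

```python
# Iometer results (MB/s) from the table above.
tests = ["2MB seq read", "2MB seq write", "4KB rand read", "4KB rand write"]
amd_890gx = [253.5, 223.8, 51.2, 152.1]   # 3/25/10 AHCI driver
intel_h55 = [264.9, 247.7, 48.6, 180.0]

for name, amd, intel in zip(tests, amd_890gx, intel_h55):
    delta = (amd - intel) / intel * 100
    print(f"{name}: {delta:+.1f}% vs. H55")
# → 2MB seq read: -4.3% vs. H55
# → 2MB seq write: -9.6% vs. H55
# → 4KB rand read: +5.3% vs. H55
# → 4KB rand write: -15.5% vs. H55
```

The 890GX trails by single digits in sequential transfers and leads in random reads, but it still gives up roughly 15% in random writes.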

The best part of it all is that I no longer had to disable CnQ or C1E to get this performance. I will note that my numbers are still lower than what AMD gets on its internal reference board, and that performance from third-party boards varies significantly from one board to the next depending on BIOS revision. But at least we’re getting somewhere.

In testing the 890GX, I decided to look into how Intel’s chipsets perform with this new wave of high performance SSDs. It’s not as straightforward as you’d think.
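Some quick math hints at why: PCIe 1.x signals at 2.5 GT/s per lane and PCIe 2.0 at 5 GT/s, and both use the same 8b/10b encoding as SATA, so usable bandwidth is the line rate divided by 10 (bytes per second). A small Python sketch using those standard line rates:

```python
def payload_mb_per_s(line_rate_gbps):
    """8b/10b encoding: 10 bits on the wire per payload byte."""
    return line_rate_gbps * 1000 / 10

pcie1_lane = payload_mb_per_s(2.5)  # one PCIe 1.x lane
pcie2_lane = payload_mb_per_s(5.0)  # one PCIe 2.0 lane
sata_6g    = payload_mb_per_s(6.0)  # 6Gbps SATA link

print(pcie1_lane, pcie2_lane, sata_6g)  # → 250.0 500.0 600.0
```

A 6Gbps SATA controller hanging off a single PCIe 1.x lane is therefore capped at roughly 250 MB/s, below what even a fast 3Gbps SSD can deliver.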

The Primer: PCI Express 1.0 vs. 2.0
Comments (56)

  • Shadowmaster625 - Tuesday, March 30, 2010

    It sounds like AMD made a conscious decision to focus on maximum random write performance, even if it required sacrificing all other key performance metrics. I hope that is the case, because it is pretty sad that their 6Gbps controller is generally outperformed by a 3Gbps controller!
  • astewart999 - Tuesday, March 30, 2010

    When talking performance, why are they not mentioning RAID 0? I suspect SATA3 is not capable?
  • astewart999 - Tuesday, March 30, 2010

    Ignore my ignorance, I read the article then posted. Should have read the posts and ignored the article!
  • nexox - Monday, April 05, 2010

    Just get a SAS II (6Gbit) PCI-E HBA (LSI makes one, probably others) - plenty of speed, they generally run in a PCI-E x8 slot, and you can run SATA drives on them just fine. Plus they tend not to cost much more than the consumer-level SATA adaptors, which are apparently questionable performance-wise. They'd at least make a good baseline for comparison.
  • supremelaw - Saturday, April 17, 2010

    RS2BL040 and RS2BL080 are now at Newegg:

    http://www.newegg.com/Product/Product.aspx?Item=N8...
    http://www.intel.com/products/server/raid-controll...

    http://www.newegg.com/Product/Product.aspx?Item=N8...
    http://www.intel.com/products/server/raid-controll...

    Before buying, confirm whether or not TRIM will work with SSDs in RAID modes.

    http://www.pcper.com/comments.php?nid=8538

    *** UPDATE ***

    The unconfirmed bit has been confirmed as unconfirmed by Intel:

    “Intel® RST 9.6 supports TRIM in AHCI and pass through modes for RAID. A bug has been submitted to change the string that indicates TRIM is supported on RAID volumes (0,1,5,10). Intel is continuing to investigate the ability of providing TRIM support for all RAID volumes in a future release”

    Looks like we'll have to wait a little longer for TRIM through RAID, but there *are* other SSD-specific improvements in this new driver.

    *** END UPDATE ***

    MRFS
  • chrcoluk - Saturday, June 19, 2010

    OK, my thoughts:

    1 - You wrote off PCIe 1.0, but failed to notice or mention that the PLX chip uses PCIe 1.x lanes from the P55 chipset, so clearly PCIe 1.0 can supply the bandwidth if utilised properly; the PLX chip turns four 1.0 lanes into two virtual 2.0 lanes for the SATA 6G and USB3 controllers.
    2 - Some P55 boards, mine noticeably, have a PCIe 2.0 slot fed off the P55 chipset at x4 speed. Have reviewers got something wrong, or are they claiming ASUS has it wrong? Even if we assume it's actually PCIe 1.0 x4, that is still enough bandwidth to feed a SATA 6G controller. Indeed the onboard PLX you praised sacrifices this x4 PCIe slot and uses those four lanes to feed itself. My theory is that the U3S6 card ASUS sells would perform the same as the onboard PLX in an x4 slot, but no reviewer has tested this properly.
    3 - What's the reason you did not test both Gigabyte's onboard controller and the lower-end ASUS onboard one, which borrow bandwidth from the primary PCIe x16 lanes? I am looking for tests of those in both turbo/levelup and normal mode.
