Final Words

AMD’s ATI acquisition was about bringing graphics to the portfolio with the eventual goal of integration into the CPU itself. We’ll see the first of that early next year with Llano. But as AMD goes down this integration route, it needs to make sure that its chipsets are at least up to par with Intel’s. Many have complained about AMD’s South Bridges in the past, but with SB850 we’ve actually seen some real improvement. There still appear to be some strange behaviors and I don’t like that there’s any discrepancy between AMD’s reference board and retail 890GX boards, but these results look very promising.

AMD’s native 6Gbps implementation manages to outperform both Marvell’s and Intel’s controllers in the 4KB random write test by a substantial margin. However, its sequential read speed is lower than the Marvell controller’s, and its random read speed is lower than that of Intel’s 3Gbps controller. With a bit of work, AMD looks like it could have the best performing SATA controller on the market.

Intel’s X58 still has a few tricks left up its sleeve - it manages to be a very high-performing 3Gbps SATA controller. Other than in sequential read speed, it’s even faster than Marvell’s 6Gbps controller paired with a 6Gbps SSD - although not by much.


Marvell makes the only 6Gbps SSD controller today. By next year that will change.

The P55 and H55 platforms are far less exciting. Any 6Gbps controller connected off the PCH is severely limited by Intel’s use of PCIe 1.0 lanes. Unfortunately this means you’ll have to use the 16 PCIe 2.0 lanes branching off the CPU for any real performance. That either means limiting your GPU to only 8 lanes, or dropping back down to PCIe 1.0 if you have two graphics cards installed. ASUS’ PLX solution is an elegant workaround for the specific case of a user with two graphics cards and an on-board 6Gbps SATA controller, and our tests show that it works well.
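To put rough numbers on that bottleneck, the arithmetic below is a quick sketch using spec figures only, not the measured results from this article. PCIe 1.0, PCIe 2.0, and SATA all use 8b/10b encoding, so payload bandwidth is 80% of the raw line rate: roughly 250 MB/s per PCIe 1.0 lane, 500 MB/s per PCIe 2.0 lane, and about 600 MB/s for a 6Gbps SATA link.

```python
# Rough per-direction bandwidth comparison. These are spec ceilings;
# real controllers land somewhat below them.

PCIE_RATE = {"1.0": 2.5e9, "2.0": 5.0e9}    # raw bits per second, per lane
ENCODING = 8 / 10                           # 8b/10b: 8 data bits per 10 line bits

def pcie_mb_s(gen: str, lanes: int) -> float:
    """Peak payload bandwidth of a PCIe link in MB/s."""
    return PCIE_RATE[gen] * ENCODING * lanes / 8 / 1e6

SATA_6G_MB_S = 6.0e9 * ENCODING / 8 / 1e6   # ~600 MB/s ceiling for 6Gbps SATA

for gen, lanes in [("1.0", 1), ("1.0", 4), ("2.0", 1), ("2.0", 4)]:
    bw = pcie_mb_s(gen, lanes)
    verdict = "bottleneck" if bw < SATA_6G_MB_S else "enough headroom"
    print(f"PCIe {gen} x{lanes}: {bw:4.0f} MB/s vs SATA 6G {SATA_6G_MB_S:.0f} MB/s -> {verdict}")
```

Even a single PCIe 2.0 lane sits below the SATA 6Gbps line rate, which is why a discrete controller hanging off one PCH lane can never show what a fast 6Gbps SSD is really capable of.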

We have to give AMD credit here. Its platform group has clearly done the right thing. By switching to PCIe 2.0 completely and enabling 6Gbps SATA today, its platforms won’t be a bottleneck for any early adopters of fast SSDs. For Intel these issues don’t go away until 2011 with the 6-series chipsets (Cougar Point), which will at least enable 6Gbps SATA.

Comments

  • vol7ron - Thursday, March 25, 2010 - link

    It would be extremely nice to see any RAID tests, as I've been asking Anand for months.

    I think he said a full review is coming; of course, he could have just been toying with my emotions.
  • nubie - Thursday, March 25, 2010 - link

    Is there any logical reason you couldn't run a video card with x15 or x14 links and send the other 1 or 2 off to the 6Gbps and USB 3.0 controllers?

    As far as I am concerned it should work (and I have a geforce 6200 modified to x1 with a dremel that has been in use for the last couple years).

    Maybe the drivers or video bios wouldn't like that kind of lane splitting on some cards.

    You can test this yourself quickly by applying some scotch tape over a few of the signal pairs on the end of the video card; you should be able to see if modern cards have any trouble linking at x9-x15 link widths. (For a software-only check of the negotiated width, see the sketch after this thread.)
  • nubie - Thursday, March 25, 2010 - link

    Not to mention, where are the x4 6Gbps cards?
  • wiak - Friday, March 26, 2010 - link

    The Marvell chip is a PCIe 2.0 x1 chip anyway, so it's limited to that speed regardless of the interface to the motherboard.

    At least this says so:
    https://docs.google.com/viewer?url=http://www.marv...

    The same goes for the NEC USB 3.0 controller; it's also a PCIe 2.0 x1 chip.
  • JarredWalton - Thursday, March 25, 2010 - link

    Like many computer interfaces, PCIe is designed to work in powers of two. You could run x1, x2, x4, x8, or x16, but x3 or x5 aren't allowable configurations.
  • nubie - Thursday, March 25, 2010 - link

    OK, x12 is accounted for according to this:

    http://www.interfacebus.com/Design_Connector_PCI_E...

    [quote]PCI Express supports 1x [2.5Gbps], 2x, 4x, 8x, 12x, 16x, and 32x bus widths[/quote]

    I wonder about x14, as it should offer much greater bandwidth than x8.

    I suppose I could do some informal testing here and see what really works, or maybe do some internet research first because I don't exactly have a test bench.
  • mathew7 - Thursday, March 25, 2010 - link

    While 12x is good for 1 card, I wonder how feasible 6x would be for 2 gfx cards.
  • nubie - Thursday, March 25, 2010 - link

    Even AMD agrees to the x12 link width:

    http://www.amd.com/us-en/Processors/ComputingSolut...

    Seems like it could be an acceptable compromise on some platforms.
  • JarredWalton - Thursday, March 25, 2010 - link

    x12 is the exception to the powers of 2; you're correct. I'm not sure it would really matter much: Anand's results show that even with plenty of extra bandwidth (i.e. in a PCIe 2.0 x16 slot), the SATA 6G connection doesn't always perform the same. It looks like BIOS tuning is at present more important than other aspects, provided of course that you're not on an x1 PCIe 1.0 link.
  • iwodo - Thursday, March 25, 2010 - link

    Well, since we are speaking in terms of graphics: could a GFX card work at 12x, or even 10x, instead of 16x, thereby saving I/O space? Also, just wondering what the status of PCI-E 3.0 is....
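As a follow-up to the link-width discussion in the thread above, here is a minimal sketch for reading the width a card actually negotiated, without resorting to tape or a Dremel. It assumes a Linux host that exposes the standard PCIe attributes under /sys/bus/pci/devices; the device address in the example is a placeholder you would replace with your own card's address as reported by lspci.

```python
# Minimal sketch: read the negotiated PCIe link width and speed from Linux sysfs.
# Assumes a Linux system; these attribute files may be missing on very old kernels.
from pathlib import Path

def link_status(bdf: str) -> dict:
    """Report current/max link width and current speed for the PCI device
    at the given bus:device.function address, e.g. '0000:01:00.0'."""
    dev = Path("/sys/bus/pci/devices") / bdf
    read = lambda name: (dev / name).read_text().strip()
    return {
        "current_width": read("current_link_width"),   # e.g. '16', '8', '1'
        "max_width": read("max_link_width"),
        "current_speed": read("current_link_speed"),    # e.g. '2.5 GT/s' or '5 GT/s'
    }

if __name__ == "__main__":
    # Placeholder address -- substitute your GPU's or SATA controller's address
    # as shown by 'lspci -D'.
    print(link_status("0000:01:00.0"))
```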
