Testing the LSI SAS 2308 PCIe Controller

As part of this review, ASRock were kind enough to provide a set of eight ADATA XPG SX910 256GB drives in order to test the LSI SAS/SATA ports on the motherboard.  These drives are rated for 500 MBps+ sequential read and write speeds, and are based on the LSI SF-2281 controller.  Because the LSI chip is routed via PCIe 3.0 lanes rather than through the chipset, our bandwidth ceiling is lifted considerably.
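
To put a rough figure on that ceiling, the sketch below works out the theoretical one-way bandwidth of the SAS 2308's PCIe 3.0 x8 uplink against the DMI 2.0 link that the chipset SATA ports share.  These are our own back-of-the-envelope numbers derived from the published link rates, not figures quoted by ASRock or LSI:

# Back-of-the-envelope link ceilings (one direction, line encoding only; real
# protocol overhead will eat a little more).  All figures here are our own
# assumptions from the published link rates, not measurements.
def link_bandwidth_gbs(lanes, gt_per_s, encoding_efficiency):
    """Approximate one-way bandwidth of a PCIe-style link in GB/s."""
    return lanes * gt_per_s * encoding_efficiency / 8   # bits -> bytes

# LSI SAS 2308: PCIe 3.0 x8 uplink (8 GT/s per lane, 128b/130b encoding).
sas2308 = link_bandwidth_gbs(lanes=8, gt_per_s=8.0, encoding_efficiency=128/130)

# DMI 2.0 between CPU and chipset is electrically similar to PCIe 2.0 x4
# (5 GT/s per lane, 8b/10b encoding), and all chipset SATA ports share it.
dmi = link_bandwidth_gbs(lanes=4, gt_per_s=5.0, encoding_efficiency=8/10)

print(f"SAS 2308 PCIe 3.0 x8 ceiling: ~{sas2308:.1f} GB/s")   # ~7.9 GB/s
print(f"Chipset DMI 2.0 ceiling:      ~{dmi:.1f} GB/s")       # ~2.0 GB/s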

The LSI SAS controller supports both SAS2 and SATA 6 Gbps drives.  Compare this to the C60x server chipsets for socket 2011 Sandy Bridge-E and Xeon processors, which allow up to eight SAS/SATA drives but limit them to SATA 3 Gbps speeds and overall chipset throughput.

The LSI controller allows for RAID 0, 1 and 10 only, which is a little odd – I would have expected some users to require RAID 5 or 6.  Running eight drives in RAID 0 is a rare real-world scenario, as a single drive failure wipes all the data, but it does allow us to study peak throughput.  The preferred scenario here is therefore RAID 10 across eight drives, taking advantage of striping across four of the drives while also having them mirrored.
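
To put the RAID trade-off into rough numbers, here is a minimal best-case sketch of our own, assuming each drive hits its rated ~500 MBps (which a real array will not quite manage), comparing eight of these 256GB drives in RAID 0 and RAID 10:

# Illustrative best-case arithmetic for eight 256GB drives rated at ~500 MBps.
# Real arrays lose some of this to controller and filesystem overhead.
DRIVES = 8
CAPACITY_GB = 256
PER_DRIVE_MBPS = 500        # rated sequential speed (manufacturer figure)

# RAID 0: everything striped, no redundancy - one failed drive loses the array.
raid0_capacity_gb = DRIVES * CAPACITY_GB                 # 2048 GB
raid0_seq_mbps = DRIVES * PER_DRIVE_MBPS                 # ~4000 MBps

# RAID 10: four striped mirror pairs.
raid10_capacity_gb = (DRIVES // 2) * CAPACITY_GB         # 1024 GB usable
raid10_write_mbps = (DRIVES // 2) * PER_DRIVE_MBPS       # writes hit both halves of a mirror
raid10_read_mbps = DRIVES * PER_DRIVE_MBPS               # reads can be spread over all drives

print(f"RAID 0 : {raid0_capacity_gb} GB, ~{raid0_seq_mbps} MBps sequential")
print(f"RAID 10: {raid10_capacity_gb} GB, ~{raid10_write_mbps} MBps write, "
      f"up to ~{raid10_read_mbps} MBps read")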

The LSI platform is very easy to use.  With the drives plugged in and powered (I had to rummage for suitable power connectors), the LSI software (MegaRAID) provided on the ASRock driver DVD is installed and run.  After a login screen, we are presented with the following:

In order to create an array, we select the ‘Create Virtual Drive’ option.  We are then given the choice of either an Easy mode or an Advanced mode for creating the virtual drive.  Both are very limited in their options, especially as we can only choose one stripe size – 64KB.

The virtual drive then needs to be mounted in Windows, using the Disk Management tool.
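
For anyone scripting a test setup rather than clicking through the GUI, the same online/partition/format steps can also be driven through diskpart.  The snippet below is a minimal sketch that assumes the new virtual drive enumerates as disk 1 and that drive letter E is free – both assumptions on our part, so check with 'list disk' first:

# Minimal sketch: bring the new LSI virtual drive online and format it via
# diskpart.  Assumes the array enumerates as disk 1 and E: is unused - verify
# with "list disk" first, as picking the wrong disk number is destructive.
import os
import subprocess
import tempfile

DISKPART_SCRIPT = """\
select disk 1
online disk noerr
attributes disk clear readonly
convert gpt
create partition primary
format fs=ntfs label=LSI_RAID quick
assign letter=E
"""

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(DISKPART_SCRIPT)
    script_path = f.name

try:
    # diskpart requires an elevated (administrator) prompt to run.
    subprocess.run(["diskpart", "/s", script_path], check=True)
finally:
    os.remove(script_path)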

By default the LSI chip is set to run in PCIe 3.0 mode in the BIOS.  There is also an option to allow booting from the LSI chip, via Port 7.  Unfortunately this adds a large amount of time to system boot – around 40-50 extra seconds.

With this in mind, we tested several scenarios for bandwidth and 4K performance with the LSI chip.  As the LSI chip has no additional cache of its own, unlike a dedicated PCIe RAID card, we will see that peak performance lies in bulk sequential transfers rather than in small transfers.

For the tests, we used the standard ATTO benchmark and the following scenarios in RAID-0:

LSI Chip with 1, 2, 3, 4, 5, 6, 7, 8 drives (listed as SAS x) via MegaRAID
Chipset with 1, 2 drives on SATA 6 Gbps (listed as x+0)
Chipset with 1, 2, 3, 4 drives on SATA 3 Gbps (listed as 0+x)
Chipset with 1 drive on SATA 6 Gbps and 1, 2, 3, 4 drives on SATA 3 Gbps (listed as 1+x)
Chipset with 2 drives on SATA 6 Gbps and 1, 2, 3, 4 drives on SATA 3 Gbps (listed as 2+x)

All chipset scenarios were set using the RAID from BIOS rather than in the OS.

In terms of peak speeds using ATTO, we recorded the following results:

There are plenty of conclusions to draw from this – for peak throughput, the LSI ports are preferred.  When using the chipset connections, going from 2+0 to 2+1 brings a drop in performance.  Also worthy of note is our top speed with eight drives in RAID-0 – we hit 4.0 GBps read and 3.72 GBps write.
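
As a quick sanity check of our own on those peak figures, working backwards shows how much each drive contributes and how close the eight-drive array comes to the drives' 500 MBps rating:

# Quick sanity check on the eight-drive RAID 0 peaks above (4.0 GBps read,
# 3.72 GBps write) against the drives' 500 MBps rating.
DRIVES = 8
RATED_MBPS = 500

for label, peak_gbps in (("read", 4.00), ("write", 3.72)):
    per_drive_mbps = peak_gbps * 1000 / DRIVES
    efficiency = per_drive_mbps / RATED_MBPS * 100
    print(f"{label:5s}: ~{per_drive_mbps:.0f} MBps per drive "
          f"(~{efficiency:.0f}% of the rated 500 MBps)")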

We can see more by looking at how the chipset handles reads and writes differently from the LSI chip.  In terms of write speeds, all the drive configurations perform nearly the same until the advantage of more drives takes over and the LSI configurations with three or more drives pull ahead.  In the read speeds, however, all the chipset configurations that feature at least one SATA 6 Gbps drive are distinctly quicker below 64 KB transfer sizes.  This could be due to some chipset caching or clever handling of the data.  Thus for standard OS drives, putting two drives on the chipset SATA 6 Gbps ports will be more beneficial than using the LSI chip.

In terms of scaling, the LSI chip has the following 3D surfaces for read and write:

In both read and write, we only see relevant scaling across the drives beyond 128KB transfers.  The larger the transfer, the closer the scaling gets to 1:1 as more drives are added.
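
One way to picture why scaling only appears at the larger transfer sizes is a toy model in which every I/O command carries a fixed overhead that only big transfers can amortise.  The overhead figure below is purely an illustrative assumption of ours, not something fitted to the measured data:

# Toy model of RAID 0 scaling: each I/O command pays a fixed overhead, so only
# large transfers keep the drives busy enough to scale.  The overhead value is
# a made-up illustration, not something measured on this array.
PER_DRIVE_MBPS = 500        # rated sequential speed of each SSD
COMMAND_OVERHEAD_US = 60    # assumed fixed cost per command (illustrative)

def array_throughput_mbps(drives, transfer_kb):
    """Best-case MBps for a striped array under the toy overhead model."""
    chunk_mb = (transfer_kb / drives) / 1024            # data per drive per command
    time_s = COMMAND_OVERHEAD_US / 1e6 + chunk_mb / PER_DRIVE_MBPS
    return drives * chunk_mb / time_s

for size_kb in (4, 64, 1024, 8192):
    one = array_throughput_mbps(1, size_kb)
    eight = array_throughput_mbps(8, size_kb)
    print(f"{size_kb:5d} KB: 1 drive ~{one:4.0f} MBps, "
          f"8 drives ~{eight:4.0f} MBps ({eight / one:.1f}x scaling)")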

Ultimately though, this may not be the best use case for the product.  As mentioned above, an eight-drive RAID 10 array of SAS drives is perhaps the more realistic scenario, where the drives themselves matter more to the overall speed than the ports do (despite ASRock's advertising quoting peak throughputs).

Comments

  • Azethoth - Monday, September 3, 2012 - link

    "a SAS". "an" is for words starting with vowels like "an error", "a" is for words starting with consonants like "a Serial Attached SCSI" or "a Storage Area Network" or "a SAS"*. It rolls off the tongue better when you don't have adjacent vowels.

    *Your particular English implementation may have different rules, these were the ones I grew up with. I find them simple and easy to apply.
  • lukarak - Tuesday, September 4, 2012 - link

    That's not entirely true.

    It would be an 'a' if you read it as 'a sas'. But with SAS, we usually pronounce it as S A S, and then it goes with 'an'.
  • ahar - Tuesday, September 4, 2012 - link

    Who's "we"? It doesn't include me. Why use three syllables when one will do?
    Do you also talk about R A M, or R A I D arrays, or an L A N?
  • Death666Angel - Tuesday, September 4, 2012 - link

    Like lukarak said, that is not true. The English language uses "an", when the word following it starts with a vowel sound. That doesn't necessarily mean it has a vowel as the first character (see hour).

    As for abbreviations, there is no rule for it. Some people pronounce them like a single word, others don't. I use LAN, RAM, RAID as a word, but pronounce SAS as S.A.S. and SATA as S.ATA for example and SNES as S.NES. You can't appease both groups. So I think the writer of the article should go with whatever he feels most comfortable with, so that he avoids flipping between things unconsciously.
  • Death666Angel - Monday, September 3, 2012 - link

    "If you believe the leaks/news online about an upcoming single slot GTX670, or want to purchase several single slot FirePro cards, then the ASRock will give you all that bandwidth as long as the user handles the heat."
    I'd probably slap some water coolers on there. Insane setup :D.
  • tynopik - Monday, September 3, 2012 - link

    Is it even confirmed that this Ivy Bridge-E is coming out?
  • ypsylon - Tuesday, September 4, 2012 - link

    But little is delivered.

    1. Primitive RAID option. Without even a small cache it is only as useful as Intel Matrix Storage RAID. Of course for R 1/10 parity calculations are not required, so the lack of an XOR chip isn't an issue, but believe me even 128 MB of cache would improve performance greatly.
    2. They bolted 8 SATA/SAS ports to the board instead of using the standard server-oriented SFF-8087 connector. You get one cable running 4 drives, not 4 separate cables for each separate drive. Very clumsy solution. And very, very cheap. Exactly what I expect of ASR.
    3. If someone wants RAID, buy a proper hardware controller, even for simple setups of R1/10 - plenty of choice on the market. When you change the board in the future you just unplug the controller from the old board and plug it into the new one. No configuration is needed, all arrays remain the same. The idea of running RAID off the motherboard is truly hilarious, especially if somebody changes boards every year or two.
    4. Fan on the south bridge (or the only bridge, as the north bridge is in the CPU now? ;) ). Have mercy!
    5. They pretend it is a WS-oriented board yet they equip it with lame Broadcom NICs. Completely clueless; that kind of inept reasoning is really typical of ASRock.
    6. And finally, why persist with ATX? At least E-ATX would be a better choice. Spacing some elements out wouldn't hurt, especially with 7 full PCI-Ex slots. It's impossible to replace RAM when the top slot is occupied, and with really big VGAs it really is a tight squeeze between CPU, RAM and VGA. Why not drop the top slot to allow air to circulate? Without proper cooling in the case there will be a pocket of hot air which will never move.

    To sum up: bloody expensive, dumb implementation of certain things, and cheaply made. Like 99% of ASRock products. A cheap Chinese fake dressed like a Rolls-Royce. In short: stay away.
  • dgingeri - Tuesday, September 4, 2012 - link

    1. Many server manufacturers equip their small business servers with a low end chip like that because of cost. Small businesses, like those who would build their own workstation class machines, have to deal with a limited budget. This works for this market space.

    2. I don't see any sign of an SFF-8087 port or cable. I see only SATA ports. Honestly, I would have preferred an SFF-8087 port/cable, as my Dell H200 in my PowerEdge T110 II uses. It would take up less real estate on the board and be more manageable. I know this from experience.

    3. Yeah, the Dell H200 (or its replacement, the H310) has plenty of ports (8) and runs <$200, yet any hardware RAID controller with a cache would run $400 for 4 ports or about $600 for 8. (I have a 3ware 9750 in my main machine that ran me $600.) Depending on your target market, cost could matter. They get what they can with the budget they have.

    4. I'd have to agree with you on the fan, but there's also the little matter of keeping clearance for the video cards to populate the slots. Take off the decorative plate and make the heatsink bigger, and they could probably do without the fan. Unfortunately, there are lots of stupid people out there who buy things on looks rather than capability.

    5. Broadcom NICs are vastly superior to the Realtek or Atheros NICs we usually see on DIY boards. I would be happier to see Intel NICs, but Broadcom is still the second best on the market. I have 2 dual port Broadcom NICs in my Dell T110 II machine (which I use as a VMware ESXi box to train up for certification and as my home server). They work quite well, as long as you don't use link aggregation.

    6. Many people wouldn't be able to afford a case that would handle E-ATX, especially the target market for this board.

    For the target market, DIY part time IT guy for a small business trying to make a decent CAD station or graphics workstation, it would work fairly well. I'm just not sure about the reliability factor, which would cost a small business big time. I'd say stay away just on that factor. Do with a little less speed and more reliability if you want to stay in business. Dell makes some nice IB workstations that would be perfectly reliable, but wouldn't be as speedy as a SB-E machine.
  • 08solsticegxp - Sunday, June 9, 2013 - link

    You have to realize, this board is not a server board. If it was designed for that, I'm sure they would have two sockets. Also, it is much cheaper to add the LSI chip to the board than have it as an add-on card. If it was an add-on card... where do you expect it to go when using 4 Video cards?
    I think the board is designed very well for what it was intended for. You may want to consider looking at design as it relates to the intended purpose... Not, some other purpose.

    I will agree that I would have liked to see a RAID 5 option on the RAID controller. However, looking at the price of LSI controllers (LSI being noted for high quality RAID controllers), it gets pretty pricey when you start getting to the ones that have RAID 5 as an option.
