Pricing, Continued

We wanted to calculate how much a small SAN with failover would cost. We assumed that four servers would share a dual-controller SAN. We opted for a 16-port switch, as we expect additional servers to use this SAN in the future and 16-port switches probably offer the best port/price ratio. Note also that we can easily expand our 12-disk SAN with several JBODs if those servers need more disk capacity.

First, we checked out several tier one storage vendors. To keep things simple, we averaged the prices we encountered at Dell/EMC, IBM, and HP at the end of October 2007. The table below is not a precise calculation or a "best buy" recommendation; it is simply an estimate to give us a reasonable overview of the costs.


Several things make a typical FC SAN quite expensive. One of the most important is the high-quality, very low-latency FC switch (a Brocade SilkWorm, for example). Secondly, the FC HBA required for each server that gets access to the SAN is rather expensive. Other small components also quickly push the cost higher: LC optic cables are still expensive, and each link between your switch and the storage rack needs a small form-factor pluggable (SFP). These compact optical transceivers are yet another cost that is usually not included with your storage rack.


SFPs add to the price of the already expensive FC SAN

The result is that for a relatively simple HA SAN configuration with less than 1.7 TB of raw storage capacity, the total cost quickly rises to $35,000 or more. It is nearly impossible to get under $20,000, even without double path HA.
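To make that cost structure concrete, here is a minimal back-of-the-envelope tally in Python. Every price in it is a hypothetical placeholder (these are not the vendor quotes behind our table); the point is simply how the per-server and per-link items pile up on top of the switch and the storage rack.

```python
# Back-of-the-envelope FC SAN tally for four servers, single path (no HA).
# All prices below are hypothetical placeholders, not actual vendor quotes.

fc_switch_16_port = 5000          # low-latency 16-port FC switch
fc_hba = 900                      # one FC HBA per attached server
sfp_transceiver = 150             # one SFP per link on the switch side
lc_optic_cable = 60               # one LC optic cable per link
dual_ctrl_rack_12_disks = 15000   # dual-controller rack with 12 x 146GB 15K disks

servers = 4
links = servers + 2               # one link per server, one per array controller

link_hardware = links * (sfp_transceiver + lc_optic_cable)
total = (fc_switch_16_port
         + servers * fc_hba
         + link_hardware
         + dual_ctrl_rack_12_disks)

print(f"SFPs and cables alone: ${link_hardware:,}")
print(f"Estimated single-path total: ${total:,}")  # well above $20,000 even without HA
```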

Let us compare this to a SAN based on a storage appliance that leaves all options open. We tried to keep the components the same as much as possible:
  • A Brocade M4400 FC Switch
  • Seagate ST3146755SS 146GB SAS 15K RPM hard drive
  • FC HBA: Emulex LPe1150-F4

The idea is clear: you save a lot of money if you can pick your own switch, your own hard disks, and your own HBAs. In both configurations (HA and no HA) the Promise configuration is significantly less expensive (25-30% less) than a typical tier one configuration. Of course, it may take a bit more effort to put your configuration together, depending on your skill. You also need a reliable reseller who can sell you everything, so you have one point of contact if something goes wrong. Even with these stipulations, you can save quite a bit of money.

If this is still too expensive, iSCSI comes to the rescue. iSCSI appliances are not much cheaper than FC appliances; in fact, in some cases they are priced almost as high as their FC counterparts. However, the pricing of switches, cables, and HBAs is significantly lower. That allows you to build a basic SAN for less than $10,000.
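To illustrate where those savings come from, the short sketch below compares the fabric cost per attached server for FC versus gigabit iSCSI with a software initiator. Again, every figure is a hypothetical placeholder rather than a quote; the structural point is that the HBA, the switch port, and the cabling are all far cheaper on the Ethernet side.

```python
# Per-server fabric cost: FC versus gigabit iSCSI (software initiator).
# All figures are hypothetical placeholders, for illustration only.

fc_fabric = {
    "FC HBA": 900,
    "FC switch port + SFP": 450,   # rough per-port share of the switch
    "LC optic cable": 60,
}

iscsi_fabric = {
    "GigE NIC (often onboard)": 40,
    "GigE switch port": 30,        # rough per-port share of the switch
    "Cat5e/6 cable": 5,
}

def per_server(fabric: dict) -> int:
    return sum(fabric.values())

servers = 4
saving = servers * (per_server(fc_fabric) - per_server(iscsi_fabric))

print(f"FC fabric per server:    ${per_server(fc_fabric):,}")
print(f"iSCSI fabric per server: ${per_server(iscsi_fabric):,}")
print(f"Fabric saving, {servers} servers: ${saving:,}")
```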


Intel's SSR212MC2 barebones starts at prices as low as $2500, bringing the price of a basic storage device down to roughly $3500. Naturally, you have to install the iSCSI software yourself. If you feel that is either too time-consuming or too difficult, quite a few resellers offer complete, ready-to-use iSCSI boxes based on the Intel SSR212MC2.
Comments

  • Anton Kolomyeytsev - Friday, November 16, 2007 - link

    Guys, I really appreciate you throwing away StarWind! Without even letting people know what configuration you used: did you enable caching, did you use flat image files, did you map a whole disk rather than a partition, which initiator did you use (StarPort or MS iSCSI), did you apply the recommended TCP stack settings, etc.? Probably it's our problem, as we've managed to release something people cannot properly configure, but why didn't you contact us to tell us you had issues so we could help you sort them out?

    With WinTarget R.I.P. (and MS selling its successor through OEMs only), StarWind thrown away, and SANmelody and IPStor not even mentioned (and they are key players!), I think your review is pretty useless... Most people are looking for software solutions when you're talking about an "affordable SAN". Do you plan to have a second round?

    Thanks once again and keep doing a great job! :)

    Anton Kolomyeytsev

    CEO, Rocket Division Software
  • Johnniewalker - Sunday, November 11, 2007 - link

    If you get a chance, it would be great to see what kind of performance you get out of an iSCSI HBA, like the one from QLogic.

    When it gets down to it, the DAS numbers are great for a baseline, but what if you have 4+ servers running those IO tests? That's what shared storage is for anyhow. Then compare the aggregate IO vs. the DAS numbers?

    For example, can 4 servers hit 25MB/s each in the SQLIO random read 8KB test, for a total of 100MB/s? How much is CPU utilization reduced with one or more iSCSI HBAs in each server vs. the software drivers? Where/how does the number of spindles move these numbers? At what point does the number of disks overwhelm one iSCSI HBA, two iSCSI HBAs, one FC HBA, two FC HBAs, or one or two SCSI controllers?

    IMHO iSCSI is the future. Most switches are cheap enough that you can easily build a separate dedicated iSCSI network. You'd be doing that if you went with Fibre Channel anyhow, but at a much higher expense (and an additional learning curve) if you don't already have it, right?

    Then all we need is someone with a really nice GUI to manage the system - a nice purdy web interface that runs on a virtual machine somewhere and shows at a glance the health, performance, and utilization of your system(s).

    System(s) have Zero faults.
    Volume(s) are at 30.0 Terabytes out of 40.00 (75%)
    CPU utilization is averaging 32% over the last 15 minutes.
    Memory utilization is averaging 85% over the last 15 minutes.
    IOs peaked at 10,000 (50%) and average 5000 (25%) over the last 15 minutes.

    Pinch me!

    -johhniewalker
  • afan - Friday, November 9, 2007 - link

    You can get one of the recently released 10Gbps PCI-E TCP/IP cards for <$800, and they support iSCSI.

    here's one example:
    http://www.intel.com/network/connectivity/products...
    The chip might be used by Myricom and others (I'm not sure), and there's a Linux and a BSD driver - a nice selling point.

    10Gb Ethernet is what should really change things.
    These cards look amazing on paper -- I'd love to see them tested:
    http://www.intel.com/network/connectivity/products...
  • JohanAnandtech - Saturday, November 10, 2007 - link

    The problem is that currently you only have two choices: expensive CX4 copper, which is short range (<15 m) and not very flexible (the cables are a lot like InfiniBand cables), or optic fiber cabling. Both the HBAs and the cables are rather expensive, and they require rather expensive switches (still less than FC, but still). So the price gap with FC is a lot smaller. Of course you get a bit more bandwidth (though I fear you won't get much more than 5 Gbit; that has to be tested, of course), and you do not need to learn FC.

    Personally, I would like to wait for 10 Gbit over UTP cat 6... But I am open to suggestions as to why the current 10 Gbit would be very interesting too.
  • afan - Saturday, November 10, 2007 - link

    Thanks for your answer, J.

    first, as far as I know, CX4 cables aren't as cheap as cat_x, but they aren't _that_ expensive - not enough to be a showstopper. If you need more length, you can go for the fibre cables -- which go _really_ far:
    http://www.google.com/products?q=cx4+cable&btn...

    I think the CX4 card (~$800) is pretty damn cheap for what you get (and remember, it doesn't have PCI-X limitations).
    Check out the Intel marketing buzz on iSCSI and the junk they're doing to speed up TCP/IP, too. It's good reading, and I'd love to see the hype tested in the real world.

    I agree with you that UTP cat 6 would be much better: more standardized, much cheaper, better range, etc. I know that, but if this is what we've got now, so be it. I think it's pretty killer, but I haven't tested it : ).

    Dell, cisco, hp, and others have CX4 adapters for their managed switches - they aren't very expensive and go right to the backplane of the switch.

    here are some dell switches that support CX-4, at least:
    http://www.dell.com/content/products/compare.aspx/...

    these are the current 10gbe intel flavors:
    copper: Intel® PRO/10GbE CX4 Server Adapter
    fibre:
    Intel® PRO/10GbE SR Server Adapter
    Intel® PRO/10GbE LR Server Adapter
    Intel® 10 Gigabit XF SR Server Adapters

    a pita is the limited number of x8 PCI-E slots in most server mobos.
    keep up your great reporting.
    best, nw
  • somedude1234 - Wednesday, November 7, 2007 - link

    First off, great article. I'm looking forward to the rest of this series.

    From everything I've read coming out of MS, the StorPort driver should provide better performance. Any reason why you chose to go with SCSIPort? Emulex offers drivers for both on their website.
  • JohanAnandtech - Thursday, November 8, 2007 - link

    Thanks. It is something that Tijl and I will look into and report back on in the next article.
  • Czar - Wednesday, November 7, 2007 - link

    Love that anandtech is going into this direction :D

    Really looking forward to your iSCSI article. I've only used fiber-connected SANs; we have an IBM DS6800 at work :) Never used iSCSI but I'm very interested in it. What I have heard so far is that it's mostly just very good for development purposes, not for production environments. And that you should turn off, I think, CHAP or whatever it is called on the switches, so the iSCSI SAN doesn't overflow the network with "are you there" messages when it transfers to the iSCSI target.
  • JohanAnandtech - Thursday, November 8, 2007 - link

    quote:

    Love that anandtech is going into this direction :D


    Just wait a few weeks :-). Anandtech IT will become much more than just one of the many tabs :-)

    quote:

    And that you should turn off, I think, CHAP or whatever it is called on the switches, so the iSCSI SAN doesn't overflow the network with "are you there" messages when it transfers to the iSCSI target.


    We will look into it, but I think it should be enough to place your iSCSI storage on a nonblocking switch in a separate VLAN. Or am I missing something?

  • Czar - Monday, November 12, 2007 - link

    think I found it
    http://searchstorage.techtarget.com/generic/0,2955...

    "Common Ethernet switch ports tend to introduce latency into iSCSI traffic, and this reduces performance. Experts suggest deploying high-performance Ethernet switches that sport fast, low-latency ports. In addition, you may choose to tweak iSCSI performance further by overriding "auto-negotiation" and manually adjusting speed settings on the NIC and switch. This lets you enable traffic flow control on the NIC and switch, setting Ethernet jumbo frames on the NIC and switch to 9000 bytes or higher -- transferring far more data in each packet while requiring less overhead. Jumbo frames are reported to improve throughput as much as 50%. "

    This is what I was talking about.

    Really looking forward to the next article :)
