Promise VTRAK

The Promise VTRAK E310f 2U RBOD has room for up to 12 hot-swappable SAS and SATA hard drives with support for RAID levels 0, 1, 5, 6, 1E and 50. Capacity scalability is not a problem: each RBOD supports up to four additional JBOD expansion enclosures, which allows up to 60 hard drives. That is about 18TB using 15,000 RPM 300GB SAS disks or 45TB using 750GB SATA disks. If that is not enough, the 3U VTE610fD lets you use 16 drives and, in combination with 3U JBODs, up to 80 drives.
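
For reference, the drive count and capacity figures work out as follows (using the decimal gigabytes and terabytes that drive vendors use):

    5 enclosures x 12 drives = 60 drives
    60 x 300GB SAS drives  = 18TB
    60 x 750GB SATA drives = 45TB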

As long as you keep your FC SAN down to a few switches, it should be easy to set up and maintain. With 16-port FC switches, the cost of your SAN should stay reasonable while still giving you a huge amount of storage capacity, depending on how many servers need access to the SAN. That is exactly the power of FC: a few hundred TB of storage capacity is possible. The E310f also supports two 4Gbps FC host ports per controller and two controllers per storage rack, making "dual path" configurations possible. This in turn makes load balancing and failover possible, but not without drivers that understand there are multiple paths to the same target. Promise has drivers ready for Windows (based on the MPIO driver development kit) and is working on Linux multipath drivers.
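
To give an idea of what "understanding multiple paths" looks like on the Linux side, here is a minimal device-mapper multipath sketch for a dual-path FC target. This is a generic illustration rather than Promise's own driver package: the vendor and product strings and the policy values are placeholder assumptions and would have to match what the array actually reports.

    # /etc/multipath.conf - illustrative sketch only
    defaults {
        user_friendly_names yes
    }
    devices {
        device {
            # Placeholder identification strings; they must match
            # the SCSI inquiry data the array returns.
            vendor                "Promise"
            product               "VTrak"
            # Spread I/O over both controller paths...
            path_grouping_policy  multibus
            # ...probe each path with TEST UNIT READY...
            path_checker          tur
            # ...and fail back as soon as a recovered path returns,
            # queueing I/O briefly if all paths drop.
            failback              immediate
            no_path_retry         12
        }
    }

On Windows the same role is played by an MPIO device-specific module, which is what Promise's MPIO-based driver provides.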

The heart of the Promise VTRAK E310f RBOD is the Intel IOP341 CPU (one core). This system-on-a-chip I/O processor is based on the low-power XScale architecture and runs at 1.2GHz. It has a rather large 512KB L2 cache for an embedded chip. The XScale chip provides "pure hardware" RAID, including support for RAID 6; RAID 0, 1, 1E, 5, 10, 50, and 60 are also supported. By default, Promise equips the E310f with 512MB of 533MHz DDR2 cache (expandable to a maximum of 2GB).


Each RBOD can use a dual active/active controller configuration (with failover/failback), or it can use a cheaper single controller configuration.

Intel SSR212MC2: an ultra-flexible platform

Whether you want an iSCSI target, a NAS, an iSCSI device that doubles as a NAS file server, an RBOD, or just a simple JBOD, you can build it with the Intel SSR212MC2. If you want to use it as an iSCSI device, you have several options:
  • Using software, as we did: install SUSE SLES on the internal 2.5" SATA/SAS hard disk and make sure that the iSCSI target daemon runs as soon as the machine is booted (see the sketch after this list). If you are an OEM, you can buy Microsoft's Windows Storage Server 2003 with the Microsoft iSCSI target, or use other third-party iSCSI targets.
  • Using a SATA, IDE or USB Disk on Module (DOM). If you don't want to administer a full OS, just buy a minimal one on a flash module that attaches to your IDE/USB/SATA connector with a converter that makes it appear to be a disk.
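
To make the software option above more concrete, this is roughly what a target definition looks like. We are assuming the open-source iSCSI Enterprise Target here; the IQN and the backing device are placeholders, not the exact configuration we benchmarked.

    # /etc/ietd.conf - illustrative sketch only (iSCSI Enterprise Target)
    # Export one RAID volume as a single LUN.
    Target iqn.2007-11.com.example:ssr212mc2.array0
        # Placeholder backing store: the block device exposed by the RAID controller
        Lun 0 Path=/dev/sdb,Type=blockio
        MaxConnections 1

After that, it is just a matter of enabling the target service (typically called iscsitarget in the IET packages) so that it starts on every boot.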


The superbly flexible 2U chassis contains an S5000PSL server board with support for two dual-core (51xx) or quad-core (53xx) Intel Xeon CPUs. In the front are twelve SATA/SAS hard disk bays controlled by the Intel RAID SRCSAS144E controller with 128MB of ECC-protected DDR-400 RAM. This controller uses the older Intel IOP333 processor running at 500MHz. That was a small disappointment, as by the time the SSR212MC2 launched, the more potent IOP341 was available at speeds up to 1.2GHz. That chip not only offers a higher clock, it also has a lot more internal bandwidth (6.4GB/s vs. 2.7GB/s) and supports hardware-enabled RAID 6. Intel's manual claims that a firmware update will enable RAID 6, but we fear that the 500MHz IOP333 might be slightly underpowered to perform RAID 6 quickly. (We'll test this in a later article.) Of course, nothing stops the OEM or you from using a different RAID card.
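
As background on why RAID 6 is so much more demanding than RAID 5: a RAID 6 stripe maintains two independent syndromes instead of one. P is the familiar XOR parity, while Q is a Reed-Solomon style syndrome that requires Galois field multiplications:

    P = D0 xor D1 xor ... xor D(n-1)
    Q = (g^0 * D0) xor (g^1 * D1) xor ... xor (g^(n-1) * D(n-1))    with multiplications in GF(2^8)

Those extra multiplications are cheap when the I/O processor has dedicated hardware for them (as the IOP341 does), but they add up quickly when done in firmware on a 500MHz chip.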

The S5000PSL provides dual Intel PRO/1000 gigabit Ethernet connections. As it allows you to have up to eight cores in your storage server, you can also use this storage server as a regular server performing processing-intensive tasks at the same time.


A single 850W power supply or a redundant 1+1 pair keeps everything powered, while 10 hot-swappable fans keep everything cool. If you want to turn this into a simple JBOD, you can buy the SSR212MC2 without the motherboard. The highly integrated VSC410 controller on the enclosure management card works together with the SAS expander (a PMC-Sierra PM8388 SXP) to offer 24 ports. You can daisy-chain another JBOD onto the first one.

Comments

  • Anton Kolomyeytsev - Friday, November 16, 2007 - link

    Guys, I really appreciate you throwing away StarWind! Without even letting people know what configuration you used: did you enable caching, did you use flat image files, did you map the whole disk rather than a partition, which initiator did you use (StarPort or MS iSCSI), did you apply the recommended TCP stack settings, etc.? Probably it's our problem that we've managed to release something people cannot properly configure, but why didn't you contact us to tell us you were having issues, so we could help you sort them out?

    With WinTarget R.I.P. (and MS selling its successor through OEMs only), StarWind thrown away, and SANmelody and IPStor not even mentioned (and they are key players!), I think your review is pretty useless... Most people are looking for software solutions when you're talking about an "affordable SAN". Do you plan to have a second round?

    Thanks once again and keep doing a great job! :)

    Anton Kolomyeytsev

    CEO, Rocket Division Software
  • Johnniewalker - Sunday, November 11, 2007 - link

    If you get a chance, it would be great to see what kind of performance you get out of an iSCSI HBA, like the one from QLogic.

    When it gets down to it, the DAS numbers are great for a baseline, but what if you have 4+ servers running those IO tests? That's what shared storage is for anyhow. Then compare the aggregate IO vs. DAS numbers?

    For example, can 4 servers hit 25MB/s each in the SQLIO random read 8KB test, for a total of 100MB/s? How much is CPU utilization reduced with one or more iSCSI HBAs in each server vs. the software drivers? Where/how does the number of spindles move these numbers? At what point does the number of disks overwhelm one iSCSI HBA, two iSCSI HBAs, one FC HBA, two FC HBAs, and one or two SCSI controllers?

    IMHO iSCSI is the future. Most switches are cheap enough that you can easily build a separate dedicated iSCSI network. You'd be doing that if you went with Fibre Channel anyhow, but at a much higher expense (and additional learning curve) if you don't already have it, right?

    Then all we need is someone who has some really nice gui to manage the system - a nice purdy web interface that runs on a virtual machine somewhere, that shows with one glance the health, performance, and utilization of your system(s).

    System(s) have Zero faults.
    Volume(s) are at 30.0 Terabytes out of 40.00 (75%)
    CPU utilization is averaging 32% over the last 15 minutes.
    Memory utilization is averaging 85% over the last 15 minutes.
    IOs peaked at 10,000 (50%) and average 5000 (25%) over the last 15 minutes.

    Pinch me!

    -johhniewalker
  • afan - Friday, November 9, 2007 - link

    You can get one of the recently-released 10Gbps PCI-E TCP/IP cards for <$800, and they support iSCSI.

    Here's one example:
    http://www.intel.com/network/connectivity/products...
    The chip might be used by Myricom and others (I'm not sure), and there's a Linux and a BSD driver - a nice selling point.

    10Gb Ethernet is what should really change things.
    They look amazing on paper -- I'd love to see them tested:
    http://www.intel.com/network/connectivity/products...
  • JohanAnandtech - Saturday, November 10, 2007 - link

    The problem is that currently you only have two choices: expensive CX4 copper, which is short range (<15m) and not very flexible (the cables are a bit like InfiniBand cables), or optical fiber cabling. Both HBAs and cables are rather expensive and require rather expensive switches (still less than FC, but still). So the price gap with FC is a lot smaller. Of course you get a bit more bandwidth (though I fear you won't get much more than 5Gbit - that has to be tested, of course), and you do not need to learn FC.

    Personally, I would like to wait for 10Gbit over UTP Cat 6... But I am open to suggestions as to why the current 10Gbit options would be very interesting too.
  • afan - Saturday, November 10, 2007 - link

    Thanks for your answer, J.

    First, as far as I know, CX4 cables aren't as cheap as Cat-x, but they aren't so expensive as to be a showstopper. If you need more length, you can go for the fibre cables -- which go _really_ far:
    http://www.google.com/products?q=cx4+cable&btn...

    I think the CX4 card (~$800) is pretty damn cheap for what you get (and remember it doesn't have PCI-X limitations).
    Check out the Intel marketing buzz on iSCSI and the junk they're doing to speed up TCP/IP, too. It's good reading, and I'd love to see the hype tested in the real world.

    I agree with you that UTP Cat 6 would be much better: more standardized, much cheaper, better range, etc. I know that, but if this is what we've got now, so be it, and I think it's pretty killer - but I haven't tested it : ).

    Dell, Cisco, HP, and others have CX4 adapters for their managed switches - they aren't very expensive and go right to the backplane of the switch.

    Here are some Dell switches that support CX4, at least:
    http://www.dell.com/content/products/compare.aspx/...

    these are the current 10gbe intel flavors:
    copper: Intel® PRO/10GbE CX4 Server Adapter
    fibre:
    Intel® PRO/10GbE SR Server Adapter
    Intel® PRO/10GbE LR Server Adapter
    Intel® 10 Gigabit XF SR Server Adapters

    a pita is the limited number of x8 PCI-E slots in most server mobos.
    keep up your great reporting.
    best, nw
  • somedude1234 - Wednesday, November 7, 2007 - link

    First off, great article. I'm looking forward to the rest of this series.

    From everything I've read coming out of MS, the StorPort driver should provide better performance. Any reason why you chose to go with SCSIPort? Emulex offers drivers for both on their website.
  • JohanAnandtech - Thursday, November 8, 2007 - link

    Thanks. It is something that Tijl and I will look into and report back on in the next article.
  • Czar - Wednesday, November 7, 2007 - link

    Love that AnandTech is going in this direction :D

    Really looking forward to your iSCSI article. I've only used fiber-connected SANs - we have an IBM DS6800 at work :) I've never used iSCSI but I'm veeery interested in it. What I have heard so far is that it's mostly just very good for development purposes, not for production environments. And that you should turn off - I think "chaps" or whatever it's called - on the switches, so the iSCSI SAN doesn't flood the network with "are you there" traffic when it transfers to the iSCSI target.
  • JohanAnandtech - Thursday, November 8, 2007 - link

    quote:

    Love that AnandTech is going in this direction :D


    Just wait a few weeks :-). Anandtech IT will become much more than just one of the many tabs :-)

    quote:

    And that you should turn off - I think "chaps" or whatever it's called - on the switches, so the iSCSI SAN doesn't flood the network with "are you there" traffic when it transfers to the iSCSI target.


    We will look into it, but I think it should be enough to place your iSCSI storage on a nonblocking switch on a separate VLAN. Or am I missing something?

  • Czar - Monday, November 12, 2007 - link

    Think I found it:
    http://searchstorage.techtarget.com/generic/0,2955...

    "Common Ethernet switch ports tend to introduce latency into iSCSI traffic, and this reduces performance. Experts suggest deploying high-performance Ethernet switches that sport fast, low-latency ports. In addition, you may choose to tweak iSCSI performance further by overriding "auto-negotiation" and manually adjusting speed settings on the NIC and switch. This lets you enable traffic flow control on the NIC and switch, setting Ethernet jumbo frames on the NIC and switch to 9000 bytes or higher -- transferring far more data in each packet while requiring less overhead. Jumbo frames are reported to improve throughput as much as 50%. "

    This is what I was talking about.

    Really looking forward to the next article :)
