DSM 5.0: Evaluating iSCSI Performance

We have already taken a look at the various iSCSI options available in DSM 5.0 for virtualization-ready NAS units. This section presents the benchmarks for various types of iSCSI LUNs on the ioSafe 1513+. It is divided into three parts: the first deals with our benchmarking setup, the second presents the actual performance numbers, and the final one offers some notes on our experience with the iSCSI features along with some analysis of the numbers.

Benchmark Setup

Hardware-wise, the NAS testbed used for multi-client CIFS evaluation was utilized here too. The Windows Server 2008 R2 + Hyper-V setup can run up to 25 Windows 7 virtual machines concurrently. The four LAN ports of the ioSafe 1513+ were bonded together in LACP mode (802.3ad link aggregation) for a 4 Gbps link. Jumbo frame settings were left at default (1500 bytes) and all the LUN / target configurations were left at default too (unless explicitly noted here).

Synology provides three different ways to create iSCSI LUNs, and we benchmarked each of them separately. For the file-based LUN configuration, we created 25 different LUNs and mapped them to 25 different targets. Each of the 25 VMs in our testbed connected to one target/LUN combination. The standard IOMeter benchmarks that we use for multi-client CIFS evaluation were also used for the iSCSI evaluation. The main difference to note is that the CIFS evaluation was performed on a mounted network share, while the iSCSI evaluation was done on a 'clean physical disk' (from the viewpoint of the virtual machine). A similar scheme was used for the block-level Multiple LUNs on RAID configuration.

For the Single LUN on RAID configuration, we had only one target/LUN combination. Synology has an option to allow multiple initiators to map an iSCSI target (for cluster-aware operating systems), and we enabled it. This allowed the same target to be mapped on all 25 VMs in our testbed. For this LUN configuration alone, the IOMeter benchmark scripts were slightly modified to change the starting sector on the 'physical disk' for each machine. This allowed each VM to have its own allocated space on which the IOMeter traces could be played out.
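The partitioning described above can be sketched as a simple offset calculation. Note that the LUN size, sector size, and function names below are illustrative assumptions, not the actual values or scripts used in our testing:

```python
# Sketch: carve a single shared LUN into equal per-VM regions by start sector.
# LUN size and sector size are assumed for illustration.

SECTOR_SIZE = 512            # bytes per sector (assumed)
LUN_SIZE_GB = 500            # total size of the shared LUN (assumed)
NUM_VMS = 25                 # clients mapping the same target

lun_sectors = LUN_SIZE_GB * 1024**3 // SECTOR_SIZE
region_sectors = lun_sectors // NUM_VMS   # equal slice per VM

def start_sector(vm_index: int) -> int:
    """Starting sector of a given VM's private region (0-based index)."""
    if not 0 <= vm_index < NUM_VMS:
        raise ValueError("vm_index out of range")
    return vm_index * region_sectors

for vm in range(3):
    print(f"VM {vm:2d}: start sector {start_sector(vm)}")
```

Each VM's IOMeter worker is then pointed at its own start sector, so the 25 clients never overwrite each other's regions despite sharing one LUN.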

Performance Numbers

The four IOMeter traces were run on the physical disk manifested by mapping the iSCSI target on each VM. The benchmarking started with one VM accessing the NAS. The number of VMs simultaneously playing out the trace was incremented one by one till we had all 25 VMs in the fray. Detailed listings of the IOMeter benchmark numbers (including IOPS and maximum response times) for each configuration are linked below:
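The incremental ramp described above can be sketched as a nested loop. The `run_trace` helper below is a hypothetical stand-in for the PowerShell orchestration that dispatches IOMeter on each active VM:

```python
# Sketch of the incremental multi-client run: start with one VM, play all
# four IOMeter traces, add one more VM, and repeat until all 25 participate.
# run_trace() is a placeholder for the actual IOMeter dispatch/collection.

TRACES = [
    "100% Sequential Reads",
    "Max Throughput - 50% Reads",
    "Random 8K - 70% Reads",
    "Real Life - 65% Reads",
]
NUM_VMS = 25

def run_trace(active_vms, trace):
    # Would trigger IOMeter on each active VM and gather bandwidth,
    # IOPS, and max response time; here we just record the step.
    return f"{len(active_vms)} client(s): {trace}"

schedule = []
for n in range(1, NUM_VMS + 1):
    active = list(range(n))          # VMs 0 .. n-1 play the trace
    for trace in TRACES:
        schedule.append(run_trace(active, trace))

print(len(schedule))   # 25 client counts x 4 traces = 100 runs
```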

ioSafe 1513+ - iSCSI LUN (Regular Files) Multi-Client iSCSI Performance - 100% Sequential Reads

ioSafe 1513+ - iSCSI LUN (Regular Files) Multi-Client iSCSI Performance - Max Throughput - 50% Reads

ioSafe 1513+ - iSCSI LUN (Regular Files) Multi-Client iSCSI Performance - Random 8K - 70% Reads

ioSafe 1513+ - iSCSI LUN (Regular Files) Multi-Client iSCSI Performance - Real Life - 65% Reads

Analysis

Synology's claim that 'Single LUN on RAID' provides the best access performance holds true for large sequential reads. In the other access patterns, the regular file-based LUNs perform quite well. However, the surprising aspect is that none of the configurations could saturate the network links to the extent that the multi-client CIFS accesses did. In fact, the best number that we saw (in the Single LUN on RAID case) was around 220 MBps, compared to the 300+ MBps that we obtained in our CIFS benchmarks.
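A quick back-of-envelope calculation puts the gap in perspective, using the figures quoted above (220 MBps iSCSI peak, 300 MBps as the lower bound of the CIFS result, and the raw 4 Gbps link ceiling before protocol overhead):

```python
# Back-of-envelope: fraction of the 4 Gbps LACP aggregate actually used
# by the observed iSCSI peak versus the earlier multi-client CIFS peak.

LINK_GBPS = 4.0
link_ceiling = LINK_GBPS * 1000 / 8      # 500 MB/s raw, before overhead

iscsi_peak = 220.0    # MB/s, Single LUN on RAID (from our results)
cifs_peak = 300.0     # MB/s, lower bound of the "300+" CIFS number

print(f"Link ceiling : {link_ceiling:.0f} MB/s")
print(f"iSCSI peak   : {iscsi_peak / link_ceiling:.0%} of ceiling")
print(f"CIFS peak    : {cifs_peak / link_ceiling:.0%} of ceiling")
```

Even ignoring Ethernet and iSCSI protocol overhead, the best iSCSI configuration uses well under half of the aggregated link, while CIFS managed over 60%.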

The more worrisome fact was that our unit completely locked up while processing the 25-client regular file-based LUN benchmark routine. On the VMs' side, we found that the target simply couldn't be accessed. The NAS itself was unresponsive to access over SSH or HTTP. Pressing the front power button resulted in a blinking blue light, but the unit wouldn't shut down. There was no alternative but to yank out the power cord in order to shut down the unit. By default, the PowerShell script for iSCSI benchmarking starts with one active VM, processes the IOMeter traces, adds one more VM to the mix and repeats the process in a loop until all 25 VMs are active and have run the four IOMeter traces. After restarting the ioSafe 1513+, we reran the PowerShell script with the 25-client access alone enabled, and the benchmark completed without any problems. Strangely, this issue happened only for the file-based LUNs; the two sets of block-based iSCSI LUN benchmarks completed without any problems. I searched online and found at least one other person reporting a similar issue, albeit with a more complicated setup using MPIO (multi-path I/O), a feature we didn't test out here.

Vendors in this market space usually offer only file-based LUNs to tick the iSCSI marketing checkbox, and some reserve block-level LUNs for their high-end models. So, Synology must be appreciated for bringing block-based LUNs to almost all of its products. In our limited evaluation, we found that stability could improve for file-based LUNs. Performance could also do with some improvement, considering that a 4 Gbps aggregated link could not be saturated. With a maximum of around 220 MBps, it is difficult to see how a LUN store on the ioSafe / Synology 1513+ could withstand a 'VM boot storm' (a situation where a large number of virtual machines using LUNs on the same NAS as the boot disk try to start up simultaneously). That said, the unit should be able to handle two or three such VMs / LUNs quite easily.

From our coverage perspective, we talked about Synology DSM's iSCSI feature because it is one of the more comprehensive offerings in this market space. If readers are interested, we can run our multi-VM iSCSI benchmarks on other SMB-targeted NAS units too. It may reveal where each vendor stands when it comes to supporting virtualization scenarios. Feel free to sound off in the comments.

Comments

  • ganeshts - Wednesday, August 13, 2014 - link

    I hope we don't have readers chiming in about how they can build a better DIY NAS than the one presented here :)
  • hodakaracer96 - Wednesday, August 13, 2014 - link

    I for one, was hoping for fire and water testing :)
  • Samus - Wednesday, August 13, 2014 - link

    Some good "tests" on youtube:
    https://www.youtube.com/watch?v=qm4J_1jFxik
    https://www.youtube.com/watch?v=yszTblXpwgY
  • ddriver - Thursday, August 14, 2014 - link

    I wouldn't bet money on this product surviving an actual fire. Insulation seems too thin
  • ganeshts - Friday, August 15, 2014 - link

    I hope you are kidding :) ioSafe's products have been proven to work - they have many real world success stories. Quite sure they can't have big-name customers if they don't prove that they can really protect the drives as per the disaster specifications quoted. Just for reference, a picture of one of the 1513+ units subject to both fire and water damage is in our CES coverage: http://www.anandtech.com/show/7684/synology-dsm-50...
  • ddriver - Friday, August 15, 2014 - link

    Well, looking at the youtube videos of fire test I am not really impressed. Surely, it will probably survive a mild and short fire with not much material to burn, but being in a serious blaze and buried in blowing embers it will not last long. A regular NAS unit put in a small concrete cellar with no flammable materials in it has better chances of surviving.

    And this probably has to do with how they test their products, which I can logically assume is safe controlled fires carefully estimated to not exceed the theoretical damage the unit can handle. But how many houses did they torch to test their products in real life disaster situations? My guess is zero :)
  • ddriver - Friday, August 15, 2014 - link

    I mean, it will most likely survive a plastic trash can full of paper catching fire and burning out next to it, but will it survive an actual blaze disaster? I highly doubt it.

    In other words, I don't doubt the product will survive what they claim it can survive, I doubt that the disaster specifications they quote reflect real world fire disasters well enough. They will probably suffice for "fire accidents" but not really in "fire disasters".
  • ganeshts - Friday, August 15, 2014 - link

    Does this convince you?

    http://geardiary.com/2009/08/04/could-your-hard-dr...

    As for real-life situations, they are claiming protection for the following fire situation: 1550°F, 30 minutes per ASTM E-119

    I remember reading a post with some statistics regarding how fast fire services respond to household fires, and ioSafe's protection circumstances fall within that. Anyway, this product is targeted at SMBs / SMEs whose buildings comply with fire marshal codes. Any blaze in such a situation is probably going to be controlled well by building sprinklers.
  • ddriver - Friday, August 15, 2014 - link

    It is the 1550 F number that bothers me. That's below 850 C, and even wood and plastic burns at almost 2000 C using air as oxidizer. Most of the stuff that is flammable burns around 1950 C, so targeting the product at 850 C pretty much excludes direct fire damage. E.g. if you have a wooden cabin and if it burns to the ground, the data is very unlikely to recover.

    That is why I drew a line between "accident" and "disaster". This product will do in the case of fire accidents, but in the case of a fire disaster its specs are just not enough.

    So, it is a "fireproof" product for buildings with anti-fire sprinkler installations and with good accessibility for fire services. In short, it doesn't protect in the case of fire disasters, but in the case of fire accidents and the water used to put them out.
  • robb.moore - Friday, August 15, 2014 - link

    Hi ddriver-
    The average cellulose building fire temps are between 800-1000F for about 10-15 minutes. We've been in many fires and have a record or zero data loss for fire disasters in the real world. Most of the building damage is actually caused by firefighter hoses - not the actual fire. The absolute temperature (1500, 1700, 2000...) is not as important as the duration actually. Think of a pot of water boiling on the stove - as long as there's water in the pot, the pot doesn't melt because the endothermic action of the boiling water (212F) keeps the pot from melting. The flame temp could be anywhere between 800 and 3000+? and the water would still boil at 212F (assuming sea level pressures). You could use an aluminum pot (which melts at 1100) and still be ok. Once the water runs dry, then you'll ruin the pot. It's actually the same with all fire safes (and ioSafe). There's water chemically bound to the insulation that works to cool the inner chamber and keeps it at survival temps. Our fire test standards is hotter and longer than typical building fires and the systems we sell typically can go double the standard just to be conservative.

    The fire protection technology is not new. We use the same proven techniques that have been around for 100 years. What's unique about ioSafe is how we combine fire/water protection with active computers - managing the heat produced during normal operation while protecting against the extreme heat possible during a disaster.

    As Ganesh has said, we test both internally and externally (with the press watching and recording!) in both standard and (ahem) very non-standard ways at times - we've NEVER failed a demo. One of these days, I'm sure a gremlin's gonna pop up and we'll get recorded by the press as failing a disaster demo (because a HDD refuses to boot) but that's the risk we take. Our stuff's legit.

    And btw, a cellar is a great place for tornados and fires but not so good for water main breaks or river floods – we’ve seen it all :)

    Robb Moore, CEO
    ioSafe Inc.
