DSM 5.0: Evaluating iSCSI Performance

We have already taken a look at the various iSCSI options available in DSM 5.0 for virtualization-ready NAS units. This section presents the benchmarks for the various types of iSCSI LUNs on the ioSafe 1513+. It is divided into three parts: the first deals with our benchmarking setup, the second presents the actual performance numbers, and the final one provides notes on our experience with the iSCSI features along with some analysis of the numbers.

Benchmark Setup

Hardware-wise, the NAS testbed used for our multi-client CIFS evaluation was utilized here too. The Windows Server 2008 R2 + Hyper-V setup can run up to 25 Windows 7 virtual machines concurrently. The four LAN ports of the ioSafe 1513+ were bonded together in LACP mode (802.3ad link aggregation) for a 4 Gbps link. Jumbo frames were left disabled (default MTU of 1500 bytes), and all LUN / target configurations were left at their defaults unless explicitly noted.
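
As a quick sanity check on the frame-size settings, a don't-fragment ping from any of the client VMs can confirm that the path to the NAS carries full 1500-byte frames (1472 bytes of ICMP payload plus 28 bytes of IP/ICMP headers). A minimal sketch, with the NAS address as a placeholder:

```powershell
# Placeholder address for the bonded NAS interface - substitute the actual IP.
$nasAddress = "192.168.1.100"

# -f sets the Don't Fragment flag, -l sets the payload size, -n the ping count.
# 1472-byte payload + 28 bytes of headers = a full 1500-byte frame.
ping.exe -f -l 1472 -n 4 $nasAddress
```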

Synology provides three different ways to create iSCSI LUNs, and we benchmarked each of them separately. For the file-based LUNs configuration, we created 25 different LUNs and mapped them onto 25 different targets. Each of the 25 VMs in our testbed connected to one target/LUN combination. The standard IOMeter benchmarks that we use for multi-client CIFS evaluation were also used for the iSCSI evaluation. The main difference to note is that the CIFS evaluation was performed on a mounted network share, while the iSCSI evaluation was done on a 'clean physical disk' (from the viewpoint of the virtual machine). A similar scheme was used for the block-level Multiple LUNs on RAID configuration.
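
For reference, the per-VM target login on the Windows 7 guests can be scripted around the built-in iscsicli utility (the dedicated iSCSI PowerShell cmdlets only arrived in later Windows versions). The sketch below assumes a simple numbered IQN naming convention; the actual IQNs generated by DSM will differ:

```powershell
param(
    # Index of this VM (1..25), used to pick its matching target - a hypothetical convention.
    [int]$VmIndex = 1
)

# Placeholder values - substitute the bonded NAS address and the IQNs DSM actually created.
$nasAddress = "192.168.1.100"
$targetIqn  = "iqn.2000-01.com.synology:ioSafe1513.Target-{0}" -f $VmIndex

# Register the NAS as a target portal and log in to this VM's target.
iscsicli.exe QAddTargetPortal $nasAddress
iscsicli.exe QLoginTarget $targetIqn
```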

For the Single LUN on RAID configuration, we had only one target/LUN combination. Synology has an option to allow multiple initiators to map the same iSCSI target (for cluster-aware operating systems), and we enabled it. This allowed the same target to be mapped on all 25 VMs in our testbed. For this LUN configuration alone, the IOMeter benchmark scripts were slightly modified to change the starting sector on the 'physical disk' for each machine. This gave each VM its own allocated region on which the IOMeter traces could be played out.
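
The per-VM offset change described above boils down to rewriting the starting-sector value in each VM's IOMeter configuration before the run. A rough illustration of the idea, using a hypothetical template file and token rather than the actual .icf layout:

```powershell
# Give each VM its own non-overlapping 40 GB region on the shared LUN (512-byte sectors assumed).
$vmIndex         = 1                    # 1..25, unique per VM
$sectorsPerSlice = 40GB / 512           # sectors in each VM's private region
$startingSector  = ($vmIndex - 1) * $sectorsPerSlice

# Hypothetical template containing a STARTING_SECTOR token in its disk-target section.
(Get-Content "iometer_template.icf") |
    ForEach-Object { $_ -replace "STARTING_SECTOR", $startingSector } |
    Set-Content "iometer_vm$vmIndex.icf"

# IOMeter can then be launched against the per-VM config, e.g.:
# & "C:\IOMeter\IOMeter.exe" /c "iometer_vm$vmIndex.icf" /r "results_vm$vmIndex.csv"
```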

Performance Numbers

The four IOMeter traces were run on the 'physical disk' that appears in each VM after mapping the iSCSI target. The benchmarking started with one VM accessing the NAS, and the number of VMs simultaneously playing out the trace was incremented one by one until all 25 VMs were in the fray. Detailed listings of the IOMeter benchmark numbers (including IOPS and maximum response times) for each configuration are linked below:

ioSafe 1513+ - iSCSI LUN (Regular Files) Multi-Client iSCSI Performance - 100% Sequential Reads
ioSafe 1513+ - iSCSI LUN (Regular Files) Multi-Client iSCSI Performance - Max Throughput - 50% Reads
ioSafe 1513+ - iSCSI LUN (Regular Files) Multi-Client iSCSI Performance - Random 8K - 70% Reads
ioSafe 1513+ - iSCSI LUN (Regular Files) Multi-Client iSCSI Performance - Real Life - 65% Reads

Analysis

Synology's claim that 'Single LUN on RAID' provides the best access performance holds true for large sequential reads. In other access patterns, the regular file-based LUNs perform quite well. However, the surprising aspect is that none of the configurations could saturate the network links to the extent that the multi-client CIFS accesses did. In fact, the best number that we saw (in the Single LUN on RAID case) was around 220 MBps, compared to the 300+ MBps that we obtained in our CIFS benchmarks.

The more worrisome fact was that our unit completely locked up while processing the 25-client regular file-based LUNs benchmark routine. On the VMs' side, we found that the target simply couldn't be accessed. The NAS itself was unresponsive to access over SSH or HTTP. Pressing the front power button resulted in a blinking blue light, but the unit wouldn't shut down. There was no alternative but to yank out the power cord in order to shut down the unit. By default, the PowerShell script for iSCSI benchmarking starts with one active VM, processes the IOMeter traces, adds one more VM to the mix and repeats the process - this is done in a loop until all 25 VMs are active and have run the four IOMeter traces. After restarting the ioSafe 1513+, we reran the PowerShell script with only the 25-client access enabled, and the benchmark completed without any problems. Strangely, this issue happened only for the file-based LUNs; the two sets of block-based iSCSI LUN benchmarks completed without any problems. I searched online and found at least one other person reporting a similar issue, albeit with a more complicated setup using MPIO (multi-path I/O) - a feature we didn't test out here.
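
For clarity, the ramp-up logic of that benchmarking script boils down to a nested loop of the shape sketched below - a simplified outline with stand-in functions, not the actual script:

```powershell
function Start-TraceOnVM {
    param([int]$VM, [string]$Trace)
    # Stand-in for the remoting call that launches IOMeter inside the given VM.
    Write-Host "VM $VM : starting trace '$Trace'"
}

function Wait-ForTraceCompletion {
    param([int[]]$VMs)
    # Stand-in for polling the active VMs until their IOMeter runs finish.
    Write-Host "Waiting for $($VMs.Count) VM(s) to finish..."
}

$allVMs = 1..25
$traces = "100% Sequential Reads", "Max Throughput - 50% Reads",
          "Random 8K - 70% Reads", "Real Life - 65% Reads"

# Start with one active VM, run all four traces, then add one more VM and repeat.
foreach ($activeCount in 1..25) {
    $activeVMs = $allVMs | Select-Object -First $activeCount
    foreach ($trace in $traces) {
        foreach ($vm in $activeVMs) { Start-TraceOnVM -VM $vm -Trace $trace }
        Wait-ForTraceCompletion -VMs $activeVMs
    }
}
```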

Vendors in this market space usually offer only file-based LUNs to tick the iSCSI marketing checkbox, and some reserve block-level LUNs for their high-end models. Synology deserves credit for making block-based LUNs available on almost all of its products. In our limited evaluation, we found that stability could be improved for file-based LUNs. Performance could also do with some improvement, considering that a 4 Gbps aggregated link could not be saturated. With a maximum of around 220 MBps (less than 10 MBps per VM if shared across all 25 clients), it is difficult to see how a LUN store on the ioSafe / Synology 1513+ could withstand a 'VM boot storm' - a situation where a large number of virtual machines using LUNs on the same NAS as their boot disks try to start up simultaneously. That said, the unit should be able to handle two or three such VMs / LUNs quite easily.

From our coverage perspective, we talked about Synology DSM's iSCSI features because they form one of the more comprehensive offerings in this market space. If readers are interested, we can run our multi-VM iSCSI benchmarks on other SMB-targeted NAS units too. That may reveal where each vendor stands when it comes to supporting virtualization scenarios. Feel free to sound off in the comments.

Comments

  • bkleven - Friday, August 15, 2014 - link

Most modern safes have a small hole in the back for cabling, which is usually intended to bring electricity into the safe to power humidity control equipment (usually just a heater). When you are forced to place a safe in a location that is not climate controlled, it's pretty important to prevent condensation from occurring anywhere inside.

I've never looked into the impact that hole has on fire protection (I presume there is some impact), but obviously flooding is an issue unless you spray-foam it or use some sort of grommet.
  • bsd228 - Monday, August 18, 2014 - link

    Cannon, for example, provides power and a cat 5 connection on most of their safes. It's not a problem.
  • Beany2013 - Saturday, August 16, 2014 - link

    I doubt the floor of my flat is rated to half a ton of spot weight. Nor that of most SOHO offices in houses or houses converted to office buildings (as with a lot of small town businesses).

    It's a pretty practical solution, though, I'll grant you, if your floor is rated for it.
  • robb.moore - Monday, August 18, 2014 - link

    Engineers have another word for power supplied into perfectly insulated boxes - "ovens" :)
    Great for baking bread, not so good for computers.

Plus, if it gets hot enough inside, it'll actually cause the insulation to kick off prematurely, rendering the safe useless in a fire. It's a non-trivial balance between heat produced during normal operation and heat resistance during a fire event. DIY and proceed with caution.

    Robb Moore, CEO
    ioSafe Inc.
  • Essence_of_War - Wednesday, August 13, 2014 - link

    Nice review! I certainly don't think I could roll my own one of these!

    I had a question about some of the time scales you present in the misc/concluding remarks section.

    Have you considered testing/reporting RAID1 or RAID10 rebuild times? Or are they so much (and consistently so) faster than the RAID5 times that it isn't particularly interesting?
  • Gigaplex - Wednesday, August 13, 2014 - link

    How does a device like this ensure good thermal transfer such that the hard drives don't overheat under regular use, while still giving good thermal isolation so they don't melt in the event of a fire?
  • jmke - Wednesday, August 13, 2014 - link

Well, they have active cooling - check the cooling fans. In case of fire, you can see that the white stuff ("DataCast" insulation) will keep the heat of the fire under control, converting to gas (and thereby absorbing the heat).

    I tested the smaller brother (ioSafe 214) with fire and water and filmed it. http://www.madshrimps.be/articles/article/1000593/...

Others have put them in cars, houses, etc. https://www.youtube.com/watch?v=OygRpR4qtcM and the drives survive.

During normal operation the active cooling keeps the drives well within safe limits.
If you want to be covered by the data disaster recovery service, make sure you pick HDDs from the qualified list.

You can also just boot up Ubuntu and mount the Synology drives to copy your data.
Ideally you'd have a second unit to just plug the drives into and go...
  • Herschel55 - Wednesday, August 13, 2014 - link

    Folks, a normal DS1513+ diskless on Amazon is $780. This is $1600, more than twice the amount. For this price I would invest in a true DR solution that mirrored a normal DS1513+ to a cloud service like S3 or Glacier, or even another DS1513+ offsite. The latest Synology DSM supports all of the above and the strategy covers ALL disasters, natural or otherwise.
  • Gigaplex - Wednesday, August 13, 2014 - link

    This is a 5 bay device, which is usually configured in a RAID5 equivalent. With 4-6TB drives, that's 16-24TB capacity. Finding a cloud provider and Internet uplink capable of transferring that amount of data in a reasonable timeframe is not trivial.
  • robb.moore - Thursday, August 14, 2014 - link

You're right on the mark, Gigaplex. With this unit, 90TB is possible with 2 expansion bays. For people concerned about recovering quickly, it can take months (maybe a year?) to stream 90TB back. And many cloud providers might offer to ship a single HDD back, but not an entire array.
    Robb Moore, CEO
    ioSafe Inc.
