DSM 5.0: Evaluating iSCSI Performance

We have already taken a look at the various iSCSI options available in DSM 5.0 for virtualization-ready NAS units. This section presents the benchmarks for various types of iSCSI LUNs on the ioSafe 1513+. It is divided into three parts: the first deals with our benchmarking setup, the second provides the actual performance numbers, and the final one offers notes on our experience with the iSCSI features as well as some analysis of the numbers.

Benchmark Setup

Hardware-wise, the NAS testbed used for multi-client CIFS evaluation was utilized here too. The Windows Server 2008 R2 + Hyper-V setup can run up to 25 Windows 7 virtual machines concurrently. The four LAN ports of the ioSafe 1513+ were bonded together in LACP mode (802.3ad link aggregation) for a 4 Gbps link. Jumbo frame settings were left at the default (1500 bytes), and all LUN / target configurations were left at their defaults as well (unless explicitly noted here).

Synology provides three different ways to create iSCSI LUNs, and we benchmarked each of them separately. For the file-based LUNs configuration, we created 25 different LUNs and mapped them to 25 different targets. Each of the 25 VMs in our testbed connected to one target/LUN combination. The standard IOMeter benchmarks that we use for multi-client CIFS evaluation were used for the iSCSI evaluation as well. The main difference to note is that the CIFS evaluation was performed on a mounted network share, while the iSCSI evaluation was done on a 'clean physical disk' (from the viewpoint of the virtual machine). A similar scheme was used for the block-level 'Multiple LUNs on RAID' configuration.
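For readers who want to replicate this kind of setup, the sketch below shows how each VM could attach its dedicated target using the iscsicli utility built into Windows. It is a minimal sketch, not our actual provisioning script: the portal IP, the IQN naming pattern, and the hostname-ends-in-index convention are all assumptions for illustration.

```python
# Minimal sketch: from inside each test VM, attach that VM's dedicated
# iSCSI target via the Windows built-in iscsicli utility.
# Assumptions (not from the article): the NAS portal IP, the IQN naming
# pattern, and VM hostnames that end in their index (e.g. iscsi-vm07).

import re
import socket
import subprocess

NAS_PORTAL = "192.168.1.50"  # assumed NAS data IP
IQN_PATTERN = "iqn.2000-01.com.synology:nas.target-{index}"  # assumed naming

def vm_index_from_hostname() -> int:
    """Extract the trailing number from a hostname like 'iscsi-vm07'."""
    match = re.search(r"(\d+)$", socket.gethostname())
    if not match:
        raise RuntimeError("hostname does not end in a VM index")
    return int(match.group(1))

def attach_target() -> None:
    iqn = IQN_PATTERN.format(index=vm_index_from_hostname())
    # Register the NAS as a target portal, then log in to this VM's target.
    subprocess.run(["iscsicli", "QAddTargetPortal", NAS_PORTAL], check=True)
    subprocess.run(["iscsicli", "QLoginTarget", iqn], check=True)

if __name__ == "__main__":
    attach_target()
```

Once the session is logged in, the LUN shows up in the VM as a fresh, uninitialized disk, which is the 'clean physical disk' that IOMeter then operates on.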

For the 'Single LUN on RAID' configuration, we had only one target/LUN combination. Synology has an option to allow multiple initiators to map an iSCSI target (for cluster-aware operating systems), and we enabled it. This allowed the same target to be mapped on all 25 VMs in our testbed. For this LUN configuration alone, the IOMeter benchmark scripts were slightly modified to change the starting sector on the 'physical disk' for each machine. This gave each VM its own allocated region on which the IOMeter traces could be played out, as sketched below.
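The per-VM offsetting can be done by rewriting the starting-sector field in each VM's copy of the IOMeter configuration file. The sketch below illustrates the idea under loudly stated assumptions: the marker line and the value-on-the-next-line layout reflect how .icf files typically look (verify against the files your IOMeter build writes), and the 20 GiB per-VM slice is an arbitrary illustrative choice, not the value we used.

```python
# Minimal sketch: give each VM a disjoint region of the shared LUN by
# patching the starting sector in its IOMeter .icf file.
# Assumptions: the marker line below matches your IOMeter version's .icf
# output, values sit on the line after the marker, and sectors are 512 bytes.

from pathlib import Path

MARKER = "'Disk maximum size,starting sector"  # assumed .icf field header
SECTOR_SIZE = 512
SLICE_BYTES = 20 * 1024**3                     # assumed 20 GiB region per VM
SLICE_SECTORS = SLICE_BYTES // SECTOR_SIZE     # 41,943,040 sectors

def patch_icf(icf_path: Path, vm_index: int) -> None:
    """Cap the test region at one slice and offset it by the VM's index."""
    lines = icf_path.read_text().splitlines()
    for i, line in enumerate(lines[:-1]):
        if line.startswith(MARKER):
            # Field order follows the marker: maximum size, starting sector.
            lines[i + 1] = f"\t{SLICE_SECTORS},{vm_index * SLICE_SECTORS}"
    icf_path.write_text("\n".join(lines) + "\n")

if __name__ == "__main__":
    patch_icf(Path("iometer_trace.icf"), vm_index=7)  # hypothetical file/index
```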

Performance Numbers

The four IOMeter traces were run on the physical disk that appears in each VM after mapping the iSCSI target. The benchmarking started with one VM accessing the NAS. The number of VMs simultaneously playing out the trace was incremented one by one till all 25 VMs were in the fray. Detailed listings of the IOMeter benchmark numbers (including IOPS and maximum response times) for each configuration are linked below:

• ioSafe 1513+ - iSCSI LUN (Regular Files) Multi-Client iSCSI Performance - 100% Sequential Reads

• ioSafe 1513+ - iSCSI LUN (Regular Files) Multi-Client iSCSI Performance - Max Throughput - 50% Reads

• ioSafe 1513+ - iSCSI LUN (Regular Files) Multi-Client iSCSI Performance - Random 8K - 70% Reads

• ioSafe 1513+ - iSCSI LUN (Regular Files) Multi-Client iSCSI Performance - Real Life - 65% Reads

Analysis

Synology's claim of 'Single LUN on RAID' providing the best access performance holds true for large sequential reads. In other access patterns, the regular file-based LUNs perform quite well. However, the surprising aspect is that none of the configurations can actually saturate the network links to the extent that the multi-client CIFS accesses did. In fact, the best number that we saw (in the 'Single LUN on RAID' case) was around 220 MBps, compared to the 300+ MBps that we obtained in our CIFS benchmarks, and both are well short of the roughly 500 MBps that a 4 Gbps aggregated link can carry in theory.

The more worrisome fact was that our unit completely locked up while processing the 25-client regular file-based LUNs benchmark routine. On the VM side, we found that the target simply couldn't be accessed. The NAS itself was unresponsive to access over SSH or HTTP. Pressing the front power button resulted in a blinking blue light, but the unit wouldn't shut down. There was no alternative but to yank out the power cord in order to shut down the unit. By default, the PowerShell script for iSCSI benchmarking starts with one active VM, processes the IOMeter traces, adds one more VM to the mix and repeats the process; this is done in a loop till the script reaches a stage where all 25 VMs are active and have run the four IOMeter traces. After restarting the ioSafe 1513+, we reran the PowerShell script with only the 25-client pass enabled, and the benchmark completed without any problems. Strangely, this issue happened only for the file-based LUNs; the two sets of block-based iSCSI LUN benchmarks completed without any problems. I searched online and found at least one other person reporting a similar issue, albeit with a more complicated setup using MPIO (multi-path I/O), a feature we didn't test out here.
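To make the ramp-up logic concrete, here is a minimal sketch of the loop just described. It is written in Python rather than PowerShell for brevity, and the VM names and the run_trace_on helper are hypothetical stand-ins for the actual remoting commands our script uses; the dry-run body only prints the schedule.

```python
# Minimal sketch of the ramp-up loop described above: start with one VM,
# run all four IOMeter traces, add a VM, and repeat until all 25 are active.
# VM names and the run_trace_on body are hypothetical; the real script uses
# PowerShell remoting to launch IOMeter and waits for completion.

TRACES = [
    "100% Sequential Reads",
    "Max Throughput - 50% Reads",
    "Random 8K - 70% Reads",
    "Real Life - 65% Reads",
]
ALL_VMS = [f"vm{n:02d}" for n in range(1, 26)]  # hypothetical VM names

def run_trace_on(vms: list[str], trace: str) -> None:
    # Dry-run stand-in: log the schedule instead of invoking IOMeter.
    print(f"{len(vms):2d} client(s): '{trace}' on {', '.join(vms)}")

def benchmark_ramp(start_clients: int = 1) -> None:
    # start_clients=25 reproduces the rerun mentioned above, where only
    # the 25-client pass was enabled after the NAS was power-cycled.
    for count in range(start_clients, len(ALL_VMS) + 1):
        active = ALL_VMS[:count]
        for trace in TRACES:
            run_trace_on(active, trace)

if __name__ == "__main__":
    benchmark_ramp()
```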

Vendors in this market space usually offer only file-based LUNs to tick the iSCSI marketing checkbox, and some reserve block-level LUNs for their high-end models. So, Synology must be commended for making block-based LUNs available on almost all of its products. In our limited evaluation, we found that stability could be improved for file-based LUNs. Performance could also do with some improvement, considering that a 4 Gbps aggregated link could not be saturated. With a maximum of around 220 MBps, it is difficult to see how a LUN store on the ioSafe / Synology 1513+ could withstand a 'VM boot storm' (a situation where a large number of virtual machines using LUNs on the same NAS as their boot disks try to start up simultaneously). That said, the unit should be able to handle two or three such VMs / LUNs quite easily.

From our coverage perspective, we talked about Synology DSM's iSCSI feature set because it is one of the more comprehensive offerings in this market space. If readers are interested, we can run our multi-VM iSCSI benchmarks on other SMB-targeted NAS units too. That may reveal where each vendor stands when it comes to supporting virtualization scenarios. Feel free to sound off in the comments.

Comments

  • Howard - Saturday, August 16, 2014 - link

    I don't know about anyone else, but the "3-2-1 rule" sounds really dumb, especially when the "1" means that you should have the data in TWO different physical locations.
  • jaden24 - Friday, August 29, 2014 - link

    But can it survive a fire, a flood, and still serve up the game Crysis?
  • Mike Kobb - Tuesday, December 16, 2014 - link

    In your closing paragraph, you comment on the fan noise as making the unit suitable for an air conditioned server room.

    I couldn't find any other mention of fan noise in the review. Is it significantly louder than the Synology 1513+ fans? Are they loud under all circumstances, or only when the ambient temperature is high or the unit is heavily loaded? The ioSafe web site lists a range of 25-59 dB(A), which is an enormous spread.
