DSM 5.0: Evaluating iSCSI Performance

We have already taken a look at the various iSCSI options available in DSM 5.0 for virtualization-ready NAS units. This section presents benchmarks for the various types of iSCSI LUNs on the ioSafe 1513+. It is divided into three parts: the first describes our benchmarking setup, the second presents the actual performance numbers, and the final one offers notes on our experience with the iSCSI features along with some analysis of the numbers.

Benchmark Setup

Hardware-wise, the NAS testbed used for multi-client CIFS evaluation was utilized here too. The Windows Server 2008 R2 + Hyper-V setup can run up to 25 Windows 7 virtual machines concurrently. The four LAN ports of the ioSafe 1513+ were bonded together in LACP mode (802.3ad link aggregation) for a 4 Gbps link. Jumbo frames were left disabled (the default MTU of 1500 bytes), and all LUN / target configurations were left at their defaults too (unless explicitly noted here).

Synology provides three different ways to create iSCSI LUNs, and we benchmarked each of them separately. For the file-based LUN configuration, we created 25 different LUNs and mapped each onto its own target. Each of the 25 VMs in our testbed connected to one target/LUN combination. The standard IOMeter benchmarks that we use for multi-client CIFS evaluation were utilized for the iSCSI evaluation as well. The main difference to note is that the CIFS evaluation was performed on a mounted network share, while the iSCSI evaluation was done on a 'clean physical disk' (from the viewpoint of the virtual machine). A similar scheme was used for the block-level Multiple LUNs on RAID configuration.

For the Single LUN on RAID configuration, we had only one target/LUN combination. Synology has an option to allow multiple initiators to map an iSCSI target (for cluster-aware operating systems), and we enabled it. This allowed the same target to map onto all 25 VMs in our testbed. For this LUN configuration alone, the IOMeter benchmark scripts were slightly modified to change the starting sector on the 'physical disk' for each machine. This allowed each VM to have its own allocated space on which the IOMeter traces could be played out.
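The per-machine starting-sector scheme described above can be sketched as follows. This is a minimal illustration, not the actual script: the sector size, per-VM region size, and the helper name are assumptions, since the article does not give the exact values used.

```python
# Sketch: carving per-VM regions out of a single shared LUN by
# giving each VM its own starting sector. Sector size and region
# size are illustrative assumptions, not the values used in the
# actual benchmark scripts.

SECTOR_SIZE = 512              # bytes per sector (assumed)
REGION_SIZE = 8 * 1024**3      # 8 GiB of private space per VM (assumed)
NUM_VMS = 25

def starting_sector(vm_index: int) -> int:
    """First sector of the region reserved for vm_index (0-based)."""
    sectors_per_region = REGION_SIZE // SECTOR_SIZE
    return vm_index * sectors_per_region

# Each VM's IOMeter config is then pointed at its own offset, so the
# traces from different VMs never overlap on the shared disk.
offsets = [starting_sector(i) for i in range(NUM_VMS)]
```

With these assumed values, VM 0 starts at sector 0, VM 1 at sector 16,777,216, and so on, giving each machine a disjoint slice of the shared LUN.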

Performance Numbers

The four IOMeter traces were run on the physical disk manifested by mapping the iSCSI target on each VM. The benchmarking started with one VM accessing the NAS. The number of VMs simultaneously playing out the trace was incremented one by one until all 25 VMs were in the fray. Detailed listings of the IOMeter benchmark numbers (including IOPS and maximum response times) for each configuration are linked below:

ioSafe 1513+ - iSCSI LUN (Regular Files) Multi-Client iSCSI Performance - 100% Sequential Reads

ioSafe 1513+ - iSCSI LUN (Regular Files) Multi-Client iSCSI Performance - Max Throughput - 50% Reads

ioSafe 1513+ - iSCSI LUN (Regular Files) Multi-Client iSCSI Performance - Random 8K - 70% Reads

ioSafe 1513+ - iSCSI LUN (Regular Files) Multi-Client iSCSI Performance - Real Life - 65% Reads

Analysis

Synology's claim that 'Single LUN on RAID' provides the best access performance holds true for large sequential reads. For other access patterns, the regular file-based LUNs perform quite well. However, the surprising aspect is that none of the configurations can actually saturate the network links to the extent that the multi-client CIFS accesses did. In fact, the best number that we saw (in the Single LUN on RAID case) was around 220 MBps, compared to the 300+ MBps that we obtained in our CIFS benchmarks.
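The shortfall can be quantified with some back-of-the-envelope arithmetic. This sketch assumes 'MBps' means megabytes per second and ignores Ethernet/TCP/iSCSI framing overhead, so the theoretical ceiling is slightly optimistic.

```python
# Sketch: how far the observed throughput falls short of the
# 4 Gbps LACP-aggregated link. Overhead from Ethernet, TCP, and
# iSCSI framing is ignored, so the ceiling is an upper bound.

LINK_GBPS = 4                       # 4 x 1 GbE bonded via 802.3ad
link_mbps = LINK_GBPS * 1000 / 8    # theoretical ceiling: 500 MB/s

best_iscsi = 220                    # best case (Single LUN on RAID)
best_cifs = 300                     # multi-client CIFS result

print(f"iSCSI uses {best_iscsi / link_mbps:.0%} of the link")
print(f"CIFS  uses {best_cifs / link_mbps:.0%} of the link")
```

Even ignoring protocol overhead, the best iSCSI result uses well under half the aggregated link, while CIFS gets to roughly 60%.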

The more worrisome fact was that our unit completely locked up while processing the 25-client regular file-based LUN benchmark routine. On the VM side, we found that the target simply couldn't be accessed. The NAS itself was unresponsive to access over SSH or HTTP. Pressing the front power button resulted in a blinking blue light, but the unit wouldn't shut down. There was no alternative but to pull the power cord to shut down the unit. By default, the PowerShell script for iSCSI benchmarking starts with one active VM, processes the IOMeter traces, adds one more VM to the mix, and repeats the process; this is done in a loop until all 25 VMs are active and have run the four IOMeter traces. After restarting the ioSafe 1513+, we reran the PowerShell script with the 25-client access alone enabled, and the benchmark completed without any problems. Strangely, this issue happened only for the file-based LUNs; the two sets of block-based iSCSI LUN benchmarks completed without any problems. I searched online and found at least one other person reporting a similar issue, albeit with a more complicated setup using MPIO (multi-path I/O), a feature we didn't test here.
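The ramp described above can be sketched in outline. This is an illustrative Python stand-in for the actual PowerShell script; `run_trace` is a hypothetical placeholder for the real IOMeter dispatch and result collection.

```python
# Sketch of the benchmark ramp: start with one active VM, run all
# four IOMeter traces, add one more VM, and repeat until all 25 are
# active. run_trace() is a hypothetical stand-in for the real
# PowerShell/IOMeter plumbing.

TRACES = [
    "100% Sequential Reads",
    "Max Throughput - 50% Reads",
    "Random 8K - 70% Reads",
    "Real Life - 65% Reads",
]
NUM_VMS = 25

def run_trace(active_vms: int, trace: str) -> None:
    # Placeholder: dispatch IOMeter on each active VM against the
    # mapped iSCSI disk and collect the results.
    pass

def ramp_benchmark() -> int:
    runs = 0
    for active in range(1, NUM_VMS + 1):   # 1, 2, ..., 25 active VMs
        for trace in TRACES:
            run_trace(active, trace)
            runs += 1
    return runs
```

A full pass is thus 25 VM counts times 4 traces, i.e. 100 trace runs; the lock-up we hit occurred only at the final, 25-client stage.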

Vendors in this market space usually offer only file-based LUNs to tick the iSCSI marketing checkbox, and some reserve block-level LUNs for their high-end models. Synology therefore deserves credit for bringing block-based LUNs to almost all of its products. In our limited evaluation, we found that stability could improve for file-based LUNs. Performance could also use some improvement, considering that a 4 Gbps aggregated link could not be saturated. With a maximum of around 220 MBps, it is difficult to see how a LUN store on the ioSafe / Synology 1513+ could withstand a 'VM boot storm' (a situation where a large number of virtual machines using LUNs on the same NAS as their boot disks try to start up simultaneously). That said, the unit should be able to handle two or three such VMs / LUNs quite easily.
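The boot-storm concern comes down to how the observed ceiling divides among simultaneously booting VMs. A rough sketch, assuming the VMs share the 220 MB/s ceiling equally (real contention would be messier, and random-I/O throughput would be lower still):

```python
# Back-of-the-envelope sketch: per-VM bandwidth during a boot storm,
# assuming booting VMs split the observed 220 MB/s ceiling evenly.
# Real workloads would see lower effective numbers due to random I/O.

CEILING_MBPS = 220

def per_vm_share(n_vms: int) -> float:
    return CEILING_MBPS / n_vms
```

Three booting VMs would each get roughly 73 MB/s, which is workable for a boot disk; twenty-five would get under 9 MB/s each, which is why a large boot storm looks problematic.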

From our coverage perspective, we discussed Synology DSM's iSCSI support because it is one of the more comprehensive offerings in this market space. If readers are interested, we can run our multi-VM iSCSI benchmarks on other SMB-targeted NAS units too. That could reveal where each vendor stands when it comes to supporting virtualization scenarios. Feel free to sound off in the comments.

Comments

  • ddriver - Saturday, August 16, 2014 - link

    Thanks for the input. BTW, where I come from, "cellar" does not imply "basement" - our cellars are usually on floor levels, tiny room, around 1 m^2 for general storage purposes, no water mains no nothing. Closer to what you may call "pantry" in the US. Cultural differences... me'h. Nothing to burn and nothing to flood in there. Plus noise in the cellar bothers nobody.

    Never failed a demo - which exactly proves the point I make earlier about being careful with controlled fires. That would create the illusion your products are flawless and someone's gonna buy one expecting it to survive his fancy wooden and lacquer soaked cottage burning to the ground which I am willing to bet it will not. You should really draw the line between "office fire accidents" and "fire disasters" just for the sake of being more realistic and not deceiving consumers, deliberately or not. People are impressed by big numbers, and could easily be impressed by the 1700 F number, absent the realization most flammable materials burn at significantly higher temperatures. Every engineer knows - there is no such thing as a flawless product, the fact you never had a failed demo only goes to show you never really pushed your products. With those zero failed demos you will very easily give consumers the wrong idea and unrealistic expectations, especially ones who are not educated on the subject. You SHOULD fail a few demos, because it will be beneficial for people to know what your products CAN'T HANDLE. A few failures in extreme cases will not degrade consumer trust as your PR folks might be prone to believing, it will actually make you look more honest and therefore more trustworthy.
  • ddriver - Saturday, August 16, 2014 - link

    I mean better you cross that line with a test unit than some outraged consumer going viral over the internet how your product failed and he lost his life work ;)
  • robb.moore - Monday, August 18, 2014 - link

    Hi ddriver-
    As mentioned, our fireproof tech relies on proven methods that are over 100 years old. Appreciate the heated skepticism though. As an engineer myself, I agree that no product is flawless and everything (including our own products) have their limits. I take back the "never failed a demo" comment. We did some gun demos with shotguns (passed) - got a bunch of flack for ONLY using shotguns so we redid the demo with fully auto AR-15's - blew holes completely through the product and of course failed...sometimes. Fun demo though.
    -Robb
  • Phil Stephen - Monday, August 18, 2014 - link

    ddriver: I'm a firefighter and can confirm that, based on the specs, these units would indeed withstand a typical structure fire.
  • zlobster - Sunday, August 17, 2014 - link

    Dear Mr. Moore,

    I'm really glad that you actually follow up with the public opinion.

    I'll try to use the opportunity to ask you whether you are planning to build a unit with a non-Intel CPU - AMD/ARM/Marvell/etc.? A unit that can handle the latest industry standards for on-the-fly encryption without sacrificing the performance even a bit?

    Also, besides the physical integrity and resiliency tests that you are conducting, do you do similar penetration-testing for data integrity? I mean, with well-known independent hacker/pen-testing communities? With all the fuss around govt. agencies putting backdoors virtually everywhere, it's of extreme importance for the peace of mind of extreme paranoiacs (like me) to know how hack-proof your devices are.

    Regards,
    Zlob
  • robb.moore - Monday, August 18, 2014 - link

    Hi Zlop-
    We're constantly working on new products. The 1513+ does use an Intel chip but our other NAS (2 bay), ioSafe 214 uses a Marvell chip. We realize that encryption can be important in many situations and we're always interested in balancing features, cost and speed for our products. Can't make direct comments though on what we have in the pipeline except stay tuned! :)

    In regards to "hack-proof", the most "hack-proof" systems (aka CIA, etc.) don't exist on the internet at all. They're fully contained offline in secured facilities. In fact, ioSafe systems are used in situations like this where offsite backup (online or physical relocation) is not allowed or impractical but the end user still wants a disaster plan.

    Obviously, there's a balance between security, accessibility, cost and speed. If you put an ioSafe system online, no firewall, never update OS/firmware, all ports open with standard admin passwords - expect mayhem. If you put an ioSafe system in a bank vault, offline and turned off it's obviously pretty secure but inaccessible - not very useful. Security can be complicated. Striking the right balance is different for every situation. Our NAS systems are based on the Synology platform which in turn is a custom Linux kernel so it's as safe as you make it generally (like all connected systems) and is susceptible to hackers if setup incorrectly. We're very happy to help you with configuring your device if you have any questions at all about the tradeoffs.

    Ultimately, though, it's under your control. You're in charge of opening or closing doors.

    Robb Moore, CEO
    ioSafe Inc.
  • PEJUman - Saturday, August 16, 2014 - link

    I second that. For this price, I really want to know if it would last the rated time under fire. Should try it under propane (BBQ tank, 2300+ C) and natural gas/wood-based flames (1900+ C) to simulate household/industrial gas lines.
  • bsd228 - Thursday, August 14, 2014 - link

    Well, for the $1000 extra, one could buy a 30"x72" gun safe that can also be used to store the regular Synology (or a DIY NAS) plus guns, camera gear, and any other valuables. Harder to steal as well - they weigh 500-1000 lbs.
  • DanNeely - Thursday, August 14, 2014 - link

    Maybe; but good luck using the NAS with the door shut...
  • smorebuds - Thursday, August 14, 2014 - link

    l0l
