Multi-Client iSCSI Evaluation

As virtualization becomes increasingly popular even in home and power-user settings, the importance of the iSCSI feature set of any COTS NAS can't be overstated. Beginning with our ioSafe 1513+ review, we have been devoting a separate section (in reviews of NAS units targeting SMBs and SMEs) to the evaluation of iSCSI performance. Since we already covered how iSCSI LUNs are implemented in DSM in the ioSafe 1513+ review, that aspect won't be discussed in detail here.

We evaluated the performance of the DS1815+ with file-based LUNs, as well as with a RAID-5 disk group configured in both multiple-LUN and single-LUN modes. The same standard IOMeter benchmarks used for the multi-client CIFS evaluation were used for the iSCSI evaluation. The main difference to note is that the CIFS evaluation was performed on a mounted network share, while the iSCSI evaluation was done on a 'clean physical disk' (from the viewpoint of the virtual machine).

Performance Numbers

The four IOMeter traces were run on the physical disk presented to each VM by mapping the iSCSI target. The benchmarking started with one VM accessing the NAS, and the number of VMs simultaneously playing out the trace was incremented one by one until all 25 VMs were in the fray. Detailed listings of the IOMeter benchmark numbers (including IOPS and maximum response times) for each configuration are linked below:

• Synology DS1815+ - Multiple LUNs (Regular Files) Multi-Client iSCSI Performance - 100% Sequential Reads
• Synology DS1815+ - Multiple LUNs (Regular Files) Multi-Client iSCSI Performance - Max Throughput - 50% Reads
• Synology DS1815+ - Multiple LUNs (Regular Files) Multi-Client iSCSI Performance - Random 8K - 70% Reads
• Synology DS1815+ - Multiple LUNs (Regular Files) Multi-Client iSCSI Performance - Real Life - 65% Reads
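For reference, the sketch below shows one way these four access specifications and the one-to-25 client ramp could be encoded for scripted runs. The block sizes and random/sequential mixes are assumptions inferred from the trace names rather than parameters published in this review, and run_trace_on_clients is a hypothetical stand-in for whatever dispatches the IOMeter dynamo workers to the VMs.

```python
# Minimal sketch of the four access specs and the 1-to-25 client ramp.
# Block sizes and random/sequential mixes are assumptions inferred from
# the trace names, not parameters published in this review;
# run_trace_on_clients() is a hypothetical dispatcher for the IOMeter
# dynamo workers running inside the VMs.
from dataclasses import dataclass
from typing import Iterator, Tuple

@dataclass
class AccessSpec:
    name: str
    block_kb: int    # transfer request size in KB (assumed)
    read_pct: int    # percentage of reads
    random_pct: int  # percentage of random (vs. sequential) accesses (assumed)

SPECS = [
    AccessSpec("100% Sequential Reads",       64, 100,   0),
    AccessSpec("Max Throughput - 50% Reads",  64,  50,   0),
    AccessSpec("Random 8K - 70% Reads",        8,  70, 100),
    AccessSpec("Real Life - 65% Reads",        8,  65,  60),
]

def run_trace_on_clients(spec: AccessSpec, num_clients: int) -> dict:
    """Hypothetical helper: plays `spec` simultaneously on the first
    `num_clients` VMs (each hitting its mapped iSCSI disk) and returns
    aggregate throughput, IOPS and maximum response time."""
    raise NotImplementedError("stand-in for the actual IOMeter dispatch")

def ramp(spec: AccessSpec, max_clients: int = 25) -> Iterator[Tuple[int, dict]]:
    # Start with one VM and add one at a time until all 25 are active.
    for n in range(1, max_clients + 1):
        yield n, run_trace_on_clients(spec, n)
```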

Since the number of NAS units that we have put through this evaluation is limited, the comparison drop-downs above include only a couple of 4-bay NAS units; unfortunately, that leaves no graphs for an apples-to-apples comparison. That said, we do see the 'single LUN on RAID' mode delivering the best performance. For some strange reason, the multiple LUNs on RAID configuration is never able to take advantage of the bonded network ports: even the simultaneous multi-client sequential reads test always stays below 125 MBps, the practical ceiling of a single gigabit link.
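That 125 MBps figure is essentially the payload ceiling of one gigabit port. The back-of-the-envelope arithmetic below (assumed raw line rates, ignoring the few percent lost to Ethernet/IP/iSCSI overhead) shows why a workload confined to a single member of the bond tops out there, while a well-balanced aggregate across the DS1815+'s four GbE ports could theoretically approach 500 MBps.

```python
# Back-of-the-envelope link-ceiling arithmetic (assumed raw line rates;
# real-world numbers lose a few percent to Ethernet/IP/iSCSI overhead).
GBE_BITS_PER_SEC = 1_000_000_000  # one gigabit Ethernet port

def ceiling_mbps(links: int) -> float:
    """Raw payload ceiling in MB/s for `links` aggregated 1 GbE ports."""
    return links * GBE_BITS_PER_SEC / 8 / 1_000_000

print(ceiling_mbps(1))  # 125.0 -> where the multiple-LUN config is stuck
print(ceiling_mbps(4))  # 500.0 -> theoretical best for a 4-port bond
```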

File-based LUNs give maximum flexibility: thin provisioning, plus support for VMware VAAI and Windows ODX. The latest versions of DSM have improved the iSCSI feature set quite a bit, and Synology's competitors have some catching up to do in this area.

As more NAS units are put through this evaluation, we hope this section will give readers a quick idea of how a particular NAS unit fares against the competition when it comes to iSCSI support.


Comments

  • rpg1966 - Tuesday, November 18, 2014 - link

    "Even though the DS1815+ is not as power efficient as the DS1812+..."

    A 1% difference in power consumption is hardly the basis for saying one unit is more or less efficient than the other. In any case, that difference is presumably well within the error bars for a test like this.
  • Sivar - Tuesday, November 18, 2014 - link

    Agreed. 1%, if not within the margin of error, is at least well inside the margin of "who cares?"
  • bestham - Wednesday, November 19, 2014 - link

    What Ganesh actually wrote is (paraphrased): "Even if the instantaneous power measurement is slightly higher, the faster rebuild time makes it more power efficient than the previous model."

    And in that regard the DS1815+ is better by a lot more than a measly percent.
  • flyingpants1 - Tuesday, November 18, 2014 - link

    For $1050 this is impossible to justify for consumer use... I'd rather get a $300 computer and have $750 left over for storage.
  • jmke - Tuesday, November 18, 2014 - link

    you are not the target audience :)
  • SirGCal - Tuesday, November 18, 2014 - link

    I agree, but this is targeted at the lazy (or whatever) who don't know how to build a simple array. I have two 8-drive computers myself: one with a dedicated card and one without, using a ZFS array in RAID 6 and RAIDZ2 formats (each with 2-drive redundancy). Both free software and easy to do. But it took me half an hour to set up the systems instead of plugging in 8 drives and 'going'. But my servers can do more than these could. Plus with my older platinum power supplies and systems in low-power mode, they also use extremely low power unless they need it. They can do a lot more too, and I use their processing power when I need it without using my other computers. But... I'm not their target either. No way I'd spend that much on a case. Even the server with the array card (a very expensive part) didn't cost that much new.
  • asendra - Tuesday, November 18, 2014 - link

    It has nothing to do with laziness. For some people, the hours invested in setting it up, administering it, and troubleshooting it are worth more than what you save by going the DIY route.
    That "half hour" of setting it up is just laughable unless you have already set up A LOT of these, but then you would also have to count the time you invested in those earlier installations to learn how to do it so efficiently now.
  • SirGCal - Tuesday, November 18, 2014 - link

    Sorry, but no. It took me no more than half an hour to do a basic Linux install and set up the array. There were NO hours to set it up; that's all it takes. It's extremely straightforward these days, and I've had the computer-illiterate set up a Linux distro from scratch with no practice and just an install CD in less than an hour. You can google how to set up ZFS arrays in your favorite distro in about 30 seconds, and setting up the array takes just about that long too. It's not even the Linux of a few years ago; everything is faster and better today. So no, it just doesn't take long. Plus, as I said, the raw power capable of doing something other than just hosting files makes the box more useful. Plus I did it with extra computer parts that were sitting around otherwise wasted and headed to Goodwill, so in effect it cost me nothing at that point.

    There are 'other' reasons I won't run a Syn array. In the end, I'd rather trust a proven OS, with all the options and security therein, over a single provider with locked-down, vendor-specific software. I also like the security and recoverability of ZFS over even my older hardware RAID card.

    But yeah, in the end, it comes down to being too ignorant (that's not a bad word, just a state of knowledge) of how to do it, or too lazy, vs. just spending the money to let someone else do it for you. But hey, that's how companies like Dell made all of their money too...

    In my case, I had all the hardware headed to Goodwill anyhow, so it cost $0. But even buying something basic, it's $300 tops to set up a system to handle this. So even for me, $750 would be worth the time to do it from scratch. I make a darn good living myself (enough to build a lot of personal computers and give money away every holiday), but not $750/hour, or even half that for the two or three hours it takes.
  • vol7ron - Tuesday, November 18, 2014 - link

    The thing you're forgetting is that all the time you've invested researching the components and deciding which parts you want also goes into building the system. Even if you were to google the parts used in a Synology and try to mimic what they have, it takes time. Then there's the shipping time for the individual parts, instead of just the system and the additional drives. It all factors in, which you are ignoring. Then there are people who value their spare time more highly than you do. Then there are people who value the Synology software, or who don't want to worry about security holes; you call it ignorance, others might call it insurance. They also don't have to deal with parts that may arrive DOA. All of that adds up in time spent, whether it's up front or paid out over time.

    I don't have a Synology, but I know people who do, and there's more to it than just ease of building. Frankly, I would like to see them offer something other than an Atom proc.
  • SirGCal - Wednesday, November 19, 2014 - link

    Let's see... You get any CPU with a matching motherboard and RAM, and a good power supply (the most critical part of any build save the motherboard). (To match a Syn setup, it would be hard to go that low-end on the open market, so you're automatically going to end up with stronger components.) Get a case, and if the motherboard doesn't have enough SATA ports for 8 drives, a SATA add-on expansion card. That's it. Shopping: 10 minutes, 30 if you're looking for something just so. Delivery: a few days, same as ordering drives or the Syn...

    Then download a copy of Linux. For ZFS, Ubuntu is one I like a lot because it's very plug-and-play, with lots of GUIs for desktop users, and you can do it so fast it's quite amusing.

    And ignorance, that's what the iCloud users who just had all of their photos stolen had... It's stored with Apple on their very secure servers, so it has to be secure, right? So let's take stupid photos we never want anyone else seeing; they're safe... That's ignorance too.

    Do whatever y'all like, but the kid down the street saw my setup and built one for his dad out of an old spare computer they had. He just asked me what OS I used to set up the array and whether he needed any other special hardware. He had to buy a SATA expansion card. It took him an hour and a half to do the job to surprise his dad, and that included downloading the software. I believe he's 9 or 10, and I wouldn't consider him geeky or nerdy by any means (I'm a geek or a nerd by trade, for example); I think he's in 5th grade. All I told him was to look up Ubuntu and ZFS for the array for any number of drives. He only had 4 drives of the same size, so I said RAIDZ (or RAID 5) was probably the best bang for the buck in that situation. His dad liked it and asked me if I did it for him; I explained that no, I hadn't, and he got a new set of much-higher-capacity drives and they built an 8-drive RAIDZ2 together. Cost them the drives. But that's the point: it doesn't really take more know-how than googling the how-to and following prompts. Even installing Ubuntu or CentOS is VERY fast (WAY faster than Windows). Set up a cron job to email you if the array has an issue; plenty of how-tos on that too, and it's basically a one-liner [a minimal sketch of such a check appears after these comments]. Install any packages you want (Plex, etc.), set up shared folders on your new array if that's your goal (the easiest way for Windows users would be to set up Samba shares), etc. There, I gave you the 101 class on everything you need to know.

    And I've said it many times, the Syn products aren't "bad" per se, just WAY too expensive.

    Spare time worth more than $750? Even at $200/hour, I want to make the millions they do... because that's what it all comes down to. To bring home $200/hour after taxes (to pay for the product) in the US, you have to make roughly a million a year with 40-hour weeks. Freakin' kick-butt! OK, OK, so you're very slow and careful and want to do it just perfect, and it's going to take you a whole 8-hour day... that's $93.75/hour, or a gross paycheck of roughly $400k/year (actually more; I was letting you take home 50% when it's more like 40%). Takes you 2 whole days? Still $46.88/hour... still a few hundred grand a year...

    This should be somewhere around $500-600 empty... But competition is so slim, they can charge whatever they want.
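One concrete piece of the DIY recipe sketched in the comments above is the "cron job that emails you if the array has an issue." Below is a minimal, hypothetical sketch of such a check, assuming ZFS on Linux with the zpool CLI on the PATH and a local SMTP server; the addresses and host are placeholders, not anything tested for this review.

```python
# Minimal sketch of a "mail me if the pool is unhappy" check meant to be
# run from cron. Assumptions: ZFS on Linux with the `zpool` CLI on PATH
# and a local SMTP server; addresses and host are placeholders.
import smtplib
import subprocess
from email.message import EmailMessage

MAIL_TO = "you@example.com"    # placeholder
MAIL_FROM = "nas@example.com"  # placeholder
SMTP_HOST = "localhost"

def pools_status() -> str:
    """Return `zpool status -x` output (only unhealthy pools are detailed)."""
    return subprocess.run(
        ["zpool", "status", "-x"],
        check=True, capture_output=True, text=True,
    ).stdout

def main() -> None:
    status = pools_status()
    # `zpool status -x` reports "all pools are healthy" when nothing is wrong.
    if "all pools are healthy" in status:
        return
    msg = EmailMessage()
    msg["Subject"] = "ZFS pool needs attention"
    msg["From"] = MAIL_FROM
    msg["To"] = MAIL_TO
    msg.set_content(status)
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    main()
```

Run from an hourly crontab entry, this approximates the one-liner monitoring the commenter describes; dedicated tools such as ZED (the ZFS event daemon) do the same job more robustly.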
