Multi-Client Performance - CIFS

We put the QNAP TS-EC1279U-RP through some IOMeter tests with a CIFS share being accessed from up to 25 VMs simultaneously. We configured the network links of the NAS in two ways. In the first configuration, two of the ports were link aggregated (and the other two teamed and configured as active backup). In the second configuration, all four links were link aggregated as a single 4 Gbps link. The following graphs show the total available bandwidth and the average response time while the unit is subjected to different types of workloads through IOMeter. IOMeter also reports various other metrics of interest, such as maximum response time, read and write IOPS, and separate read and write bandwidth figures. Selected metrics from the evaluation of the QNAP TS-EC1279U-RP are available here for the dual teaming case and here for the quad teaming case.
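For readers who want to reproduce this kind of roll-up, a minimal sketch of how per-client IOMeter results can be aggregated into the totals and averages we graph is below. The CSV layout and column names are assumptions for illustration, not IOMeter's exact export format; adjust them to match your own run files.

```python
# Minimal sketch: aggregate per-client IOMeter results into run-level totals.
# Assumptions (hypothetical): one CSV per run, one row per client/worker,
# with "MBps" and "Average Response Time" columns in the export.
import csv

def aggregate_run(csv_path):
    """Return (total bandwidth in MBps, mean response time in ms) for one run."""
    bandwidths, response_times = [], []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            bandwidths.append(float(row["MBps"]))
            response_times.append(float(row["Average Response Time"]))
    total_mbps = sum(bandwidths)                      # bandwidth adds across clients
    avg_rt = sum(response_times) / len(response_times)  # response time is averaged
    return total_mbps, avg_rt

if __name__ == "__main__":
    # e.g. one file per client count: iometer_01_clients.csv ... iometer_25_clients.csv
    total, rt = aggregate_run("iometer_25_clients.csv")
    print(f"Total bandwidth: {total:.1f} MBps, average response time: {rt:.2f} ms")
```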

We don't have any comparison graphs for these because this is the first unit we have evaluated using SSDs (120 GB OCZ Vector drives). The previous tower form factor NAS units were evaluated using Western Digital 4 TB RE disks. We haven't graphed the results from the sequential 50% reads tests, since the aggregate bandwidth figures put out by IOMeter seem implausible (even though the standalone read and write bandwidth figures make sense).

Comparing the sequential 100% reads performance between the dual and quad teaming cases, we find that performance is limited by the network links. In the other cases, it appears that even more performance could be extracted from the unit by having more clients access it simultaneously.
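As a back-of-the-envelope check on the network-limited claim, the theoretical ceilings of the two teaming configurations can be worked out from the raw GbE line rate and standard Ethernet framing overhead. This is our own illustration of the arithmetic, independent of the test setup:

```python
# Back-of-the-envelope ceilings for the two NAS-side link configurations.
# A single GbE link carries 1 Gbps = 125 MB/s raw; Ethernet framing overhead
# (preamble + header + FCS + inter-frame gap) trims roughly 2.5% off that
# at the standard 1500-byte MTU.
GBE_RAW_MBPS = 125.0          # 1 Gbps expressed in MB/s
L2_EFFICIENCY = 1500 / 1538   # payload bytes / bytes on the wire, ~0.975

for links in (2, 4):
    ceiling = links * GBE_RAW_MBPS * L2_EFFICIENCY
    print(f"{links} x GbE aggregated: ~{ceiling:.0f} MB/s usable")

# 2 x GbE -> ~244 MB/s, 4 x GbE -> ~488 MB/s. Sequential 100% reads numbers
# that plateau near these values point to the network links, not the disks.
```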

Comments

  • Evadman - Wednesday, May 1, 2013 - link

    Why is this so expensive for the performance, and why is single client performance so bad? Granted, I deal with actual enterprise class SAN devices from EMC and the like, but even my ~4 year old personal server can beat this box. My crappy home server has 20 rotational Hitachi 3 TB GST Deskstar 0S03230 disks in RAID 60, an E5200 CPU and an Adaptec 52445 running on MS Server 2008 - not even close to being decent for enterprise level. Besides the disks, it cost under a grand and will max out a quad-linked 4 Gbps connection with one client; I don't need to add 3 or 4 clients as your graphs show this box needs. There is no excuse for a 20 rotational disk device to beat this 12 disk SSD NAS/SAN before hitting the network limit. I should get a dozen SSDs and a 10 gig switch and see what my crappy box can do just for kicks. *makes notes to see if a spare switch can be found in the office*
  • ganeshts - Wednesday, May 1, 2013 - link

    The single client performance is for a single client with a 1 GbE link (so it obviously can't max out a 4 Gbps aggregated link). Client machines usually have only a single GbE port.

    Our multi-client graphs show performance with multiple clients and indicate the limitation imposed by the network link bandwidth on the NAS side.
  • Evadman - Thursday, May 2, 2013 - link

    I must be misreading the graphs being presented then. This real world graph: http://images.anandtech.com/doci/6922/qnap_ts1279u... shows 5 clients, each at ~20 MB/s, for a total of 80 MB/s. The theoretical maximum is 125 MB/s; adding the control data to the payload of the frame, you should have about 97.5% data. So it looks like it takes more than 5 clients to get to the 1 Gbps limit. On the single client CIFS graph here: http://images.anandtech.com/graphs/graph6922/54437... only 2 of the performance benchmarks appear network limited at 123 MB/s. Office Productivity is low at 25-28 MB/s, and that is probably what a small business is going to be doing the most of. Is this a client/CIFS issue and not a NAS/SAN issue?
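The ~97.5% figure in the comment above comes straight from Ethernet framing overhead at the standard 1500-byte MTU. A quick sketch of the arithmetic (our own illustration; the constants are standard Ethernet values, not measurements from this unit):

```python
# Effective throughput of a single GbE link at a 1500-byte MTU.
# Per frame on the wire: 8 (preamble/SFD) + 14 (header) + 4 (FCS) + 12 (IFG)
# = 38 bytes of overhead around 1500 bytes of payload.
PAYLOAD = 1500
WIRE = PAYLOAD + 38                # 1538 bytes actually occupy the link
RAW_MBPS = 1e9 / 8 / 1e6           # 1 Gbps = 125 MB/s

l2 = PAYLOAD / WIRE                # ~0.975, the ~97.5% cited in the comment
print(f"L2 efficiency: {l2:.1%} -> {RAW_MBPS * l2:.1f} MB/s")

# Subtracting IP + TCP headers (40 bytes) leaves ~118-119 MB/s for file data,
# which is why well-tuned single-client CIFS transfers top out near 120 MB/s.
tcp = (PAYLOAD - 40) / WIRE
print(f"TCP payload efficiency: {tcp:.1%} -> {RAW_MBPS * tcp:.1f} MB/s")
```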
