Single Client Performance - CIFS, NFS and iSCSI

The single client CIFS performance of the QNAP TS-EC1279U-RP was evaluated on the Windows platform using Intel NASPT and our standard robocopy benchmark. These tests were run from one of the virtual machines in our NAS testbed. All data for the robocopy benchmark on the client side was placed in a RAM disk (created using OSFMount) to ensure that the client's storage subsystem wouldn't affect the benchmark results.
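The review doesn't spell out the exact robocopy command line; a minimal sketch of the kind of invocation involved, assuming R: is the OSFMount RAM disk on the client and that TestShare and testfolder are placeholder names, would be:

robocopy R:\testfolder \\NAS_IP\TestShare\testfolder /E /NP

The /E switch copies the full directory tree and /NP suppresses the per-file progress output; for a read test, the source and destination paths would simply be swapped.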

QNAP TS-EC1279U-RP CIFS Performance - Windows

We created a 250 GB iSCSI target and mapped it on the Windows VM; a sketch of one way to attach such a target from the client command line is given below. The same benchmarks were run, and the results follow.
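This is only an illustration; the review does not specify the exact steps used, and the IQN below is a placeholder. The built-in Windows iSCSI initiator CLI can discover and log into the target as follows:

iscsicli QAddTargetPortal NAS_IP

iscsicli ListTargets

iscsicli QLoginTarget iqn.2004-04.com.qnap:placeholder-target

After logging in, the new disk would typically be brought online and formatted from Disk Management before NASPT and robocopy are run against it.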

QNAP TS-EC1279U-RP iSCSI Performance - Windows

A CentOS 6.2 virtual machine was used to evaluate NFS and CIFS performance of the NAS when accessed from a Linux client. In order to standardize the testing across multiple NAS units, the following parameters were used to mount the NFS and Samba shares:

mount -t nfs NAS_IP:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER -o async,hard,proto=tcp,noac

mount -t cifs //NAS_IP/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER -o directio

Note that we have not tried to optimize NFS performance for the NAS. In order to keep things as uniform as possible, we simply look at comparative performance with the default mounting options combined with the parameters above (which are chosen to avoid caching effects on the client side). The scripts available here were run in IOMeter; a sketch of a typical invocation follows, and the graph further below presents the various results.
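The exact IOMeter setup is not listed in the review; a typical arrangement has the dynamo workload generator on the CentOS VM connect back to the machine running IOMeter, which loads the access-specification scripts. Host addresses and file names below are placeholders:

dynamo -i IOMETER_HOST_IP -m LINUX_CLIENT_IP

IOmeter.exe /c nas_workloads.icf /r results.csv

The -i option tells dynamo where the IOMeter host is and -m identifies the Linux manager itself, while /c loads the saved configuration on the IOMeter host and /r writes the results out to a CSV file.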

QNAP TS-EC1279U-RP Linux Performance

 

Comments

  • Evadman - Wednesday, May 1, 2013 - link

    Why is this so expensive for the performance, and why is single client performance so bad? Granted I deal with actual enterprise class SAN devices from EMC and the like, but even my ~4 year old personal server can beat this box. My crappy home server is 20 rotational 3 TB Hitachi GST Deskstar 0S03230 disks in RAID 60, an E5200 CPU and an Adaptec 52445 running on MS Server 2008, not even close to being decent for enterprise level. Besides the disks, it cost under a grand and will max out a quad-linked 4 Gbps connection with one client; I don't need to add 3 or 4 as your graphs show that this box needs. There is no excuse for a 20 rotational disk device to beat this 12 disk SSD NAS/SAN before hitting the network limit. I should get a dozen SSDs and a 10 gig switch and see what my crappy box can do just for kicks. *makes a note to see if a spare switch can be found in the office*
  • ganeshts - Wednesday, May 1, 2013 - link

    The single client performance is for a single client with a 1 GbE link (so it can't max out a 4GbE link obviously). Client machines usually have only a single GbE port.

    Our multi-client graphs show performance with multiple clients and indicate the limitation imposed by the network link bandwidth on the NAS side.
  • Evadman - Thursday, May 2, 2013 - link

    I must be misreading the graphs being presented then. This real-world graph: http://images.anandtech.com/doci/6922/qnap_ts1279u... shows 5 clients, each at ~20 MB/s for a total of 80 MB/s. The theoretical maximum is 125 MB/s; adding the control data to the payload of the frame, you should have about 97.5% data. So it looks like it is taking more than 5 clients to get to the 1 Gbps limit. On the single client CIFS graph here: http://images.anandtech.com/graphs/graph6922/54437... only 2 of the performance benchmarks appear network limited at 123 MB/s. Office Productivity is low at 25-28 MB/s, and that is probably what a small business is going to be doing the most of. Is this a client/CIFS issue and not a NAS/SAN issue?
