Single Client Performance - CIFS and NFS on Linux

A CentOS 6.2 virtual machine was used to evaluate NFS and CIFS performance of the NAS when accessed from a Linux client. In order to standardize the testing across multiple NAS units, the following commands were used to mount the NFS and CIFS (Samba) shares:

mount -t nfs NAS_IP:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER

mount -t cifs //NAS_IP/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER
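
For illustration, a concrete invocation might look like the following (the IP address, share names, mount folders and credentials here are hypothetical placeholders; CIFS mounts will typically also need a username/password option):

mount -t nfs 192.168.1.50:/volume1/media /mnt/nas_nfs

mount -t cifs //192.168.1.50/media /mnt/nas_cifs -o username=testuser,password=testpass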

Note that these are slightly different from the mount commands used in our previous NAS reviews. We have also shifted from IOMeter to IOZone for evaluating performance under Linux. The following IOZone command was used to benchmark the CIFS share:

iozone -aczR -g 2097152 -U /PATH_TO_LOCAL_CIFS_MOUNT -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile -b <NAS_NAME>_CIFS_EXCEL_BIN.xls > <NAS_NAME>_CIFS_CSV.csv
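
For readers unfamiliar with IOZone, the options in the command above break down roughly as follows (per the IOZone documentation):

# -a          full automatic mode (all tests over a range of file and record sizes)
# -c          include close() in the timing calculations
# -z          with -a, also test small record sizes against large files
# -R          generate an Excel-compatible report on standard output
# -g 2097152  cap the maximum file size at 2097152 KB (2 GB)
# -U          unmount and remount the given mount point between tests
# -f          path of the temporary test file on the share
# -b          filename for the binary Excel-format output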

IOZone provides benchmark numbers for a multitude of access scenarios with varying file sizes and record lengths. Some of these are very susceptible to caching effects on the client side. This is evident in some of the graphs in the gallery below.
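
The -U option helps by remounting the share between individual tests, but the client's page cache can still inflate some of the numbers. One common way to minimize this on a Linux client (not necessarily what was done for these runs) is to flush the page cache between benchmark passes:

sync
echo 3 > /proc/sys/vm/drop_caches    # as root: drop the page cache, dentries and inodes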

Readers interested in the hard numbers can refer to the CSV program output here. These numbers will gain relevance as we benchmark more NAS units with similar configurations.

The NFS share was also benchmarked in a similar manner with the following command:

iozone -aczR -g 2097152 -U /nfs_test_mount/ -f /nfs_test_mount/testfile -b <NAS_NAME>_NFS_EXCEL_BIN.xls > <NAS_NAME>_NFS_CSV.csv
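
Note that the -U option expects IOZone to be able to unmount and remount the share by its mount point alone, which generally means the client needs a corresponding /etc/fstab entry. A hypothetical entry for the NFS test mount might look like this:

NAS_IP:/PATH_TO_NFS_SHARE  /nfs_test_mount  nfs  defaults  0  0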

Some scenarios exhibit client caching effects, and these are evident in the gallery below.

The IOZone CSV output can be found here for those interested in the exact numbers.

A summary of the bandwidth numbers for various tests, averaged across all file and record sizes, is provided in the table below. As noted previously, some of these numbers are skewed by caching effects; referring to the actual CSV outputs linked above makes the entries affected by caching obvious. A rough sketch for computing such averages from the raw report output follows the table.

Netgear ReadyNAS 716 - Linux Client Performance (MBps)

IOZone Test        CIFS    NFS
Init Write           76     16
Re-Write             76     16
Read                 32    120
Re-Read              32    121
Random Read          19     51
Random Write         73     19
Backward Read        19     42
Record Re-Write     743    401
Stride Read          28     82
File Write           76     16
File Re-Write        75     17
File Read            23     84
File Re-Read         22     87
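
For those who want to reproduce the averages above from the raw output, a rough sketch along the following lines can work. It assumes the usual layout of IOZone's -R report, where each section starts with a quoted title such as "Writer report", followed by a header row of record sizes and then one row per file size with unquoted throughput values in KB/s:

awk '
  /"Writer report"/ { in_report = 1; skip_header = 1; next }   # start of the write section
  in_report && skip_header { skip_header = 0; next }           # skip the record-size header row
  in_report && NF == 0 { in_report = 0 }                       # a blank line ends the section
  in_report { for (i = 2; i <= NF; i++) { sum += $i; n++ } }   # accumulate throughput cells (KB/s)
  END { if (n) printf "Average write throughput: %.0f MBps\n", sum / (n * 1024) }
' <NAS_NAME>_CIFS_CSV.csv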


Comments

  • Runiteshark - Wednesday, January 1, 2014 - link

    Some tests being multi-client CIFS. Look at the throughput he's getting on a single client. I'm pushing 180MBps CIFS and 200MBps through NFS LAGging dual 1G links to a single client. The host pushing this data is a 72-bay Supermicro chassis with dual E5-2697 v2s, 256GB of RAM, 72 Seagate 5900rpm NAS drives, 4x Samsung 840 Pro 512GB SSDs, 3 LSI 2308 controllers, and a single Intel X520-T2 dual 10G NIC hooked up to an Extreme X670V over twinax with a frame size of 9216. Typical files are medium-sized at roughly 150MB each, copied with 48 threads of rsync.

    One thing I didn't see in the test bed was the configuration of jumbo frames, which definitely changes the characteristics of single-client throughput. I'm not sure if you can run large jumbo frames on the Netgear switch.

    If I needed 10G, which I don't because the disks/proc in the microserver couldn't push much more, I could toss in a dual 10G Intel adapter for roughly $450.
  • imsabbel - Thursday, January 2, 2014 - link

    That's because his single-client tests only use a single 1Gbit connection on the client side. I know, it's stupid, but the fact that ALL transfer tests are literally limited to something like 995Mbit/s should have given you a clue that AnandTech does strange things with their testing.
  • Runiteshark - Friday, January 3, 2014 - link

    I didn't even see that! What the hell was the point of the test then?
  • Gigaplex - Wednesday, January 1, 2014 - link

    Am I reading this correctly? You used 1GbE, not 10GbE adapters on the test bed? I'd like to see single client speeds using 10GbE.
  • ZeDestructor - Wednesday, January 1, 2014 - link

    6 quad-port NICs + 1 on-board NIC, so 25 gigabit ports split over 25 VMs.

    As for single-client speeds, it should be possible to get those using LAGs, and it's a worthy point to mention; easily possible even with the current setup, although I would like to see some Intel X540 cards in use myself...
  • BMNify - Thursday, January 2, 2014 - link

    Hmm, am I missing something here?
    You only use 6 x Intel ESA I-340 Quad-GbE Port Network Adapters,

    as in only using 4 "1GbE" ports and NO actual "10GbE" card to max out the end-to-end connection?

    Don't get me wrong, it's nice to finally get a commercial SOHO-type unit that's actually got 10GbE as standard after decades of nothing but antiquated 1GbE cards at reasonable prices, but you also NEED that new extra 10GbE card to put in your PC alongside a 10GbE router/switch, so this $3K NAS is way too expensive for the SOHO masses today, alas.
  • ganeshts - Thursday, January 2, 2014 - link

    6x quad ports = 24 1-GbE ports + one onboard 1GbE = 25 GbE in total.
  • BMNify - Thursday, January 2, 2014 - link

    Oh right, so it's 25 "1GbE" ports and NO actual "10GbE" card to max out the end-to-end connection.
  • BMNify - Thursday, January 2, 2014 - link

    It still seems very odd to have a 24-thread, dual-socket (6-core/12-thread per CPU) test bench with a 10GbE router/switch and this $3K NAS with a dual "10GbE" card that could be bonded together at both ends, and yet AT just tests the kit at the 1GbE port bottleneck and doesn't even install another dual "10GbE" card in the PC end. Then they could try, for instance, starting several concurrent ffmpeg jobs upscaling and encoding high-profile/bitrate 1080p content to UHD over iSCSI to the "10GbE" NAS, to max out all 12 cores/24 threads of SIMD or other options, and push that exclusive "10GbE" connection rather than any old combination of antiquated "1GbE" cards.
  • hoboville - Thursday, January 2, 2014 - link

    I hate sounding like a naysayer, but these boxes are so expensive. You can build a system with similar specs for much less under FreeNAS and ZFS (as other commenters have noted). Supermicro makes some great boards, and with the number of case options you get when going DIY, expandability is very much an option if you need it further down the road. Then again, a lot of the cost comes from 10GbE NICs, which aren't cheap.
