Multi-Client Performance - CIFS

We put the Netgear ReadyNAS 716 through a series of IOMeter tests with a CIFS share accessed from up to 25 VMs simultaneously. The four graphs below show the total available bandwidth and the average response time under different types of IOMeter workloads. IOMeter also reports various other metrics of interest, such as maximum response time, read and write IOPS, and separate read and write bandwidth figures. For this benchmark, the two 10GBase-T ports were link aggregated and connected to two teamed ports on the XS712T. The XS712T's SFP+ ports were, in turn, teamed and connected to the teamed SFP+ ports of the GSM 7352S (to which the rest of the VMs were physically connected). Other interesting aspects from our IOMeter benchmarking run can be found here.
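
For readers looking to replicate a similar setup, link aggregation of two 10GBase-T ports on a Linux host would look roughly like the sketch below (this uses the standard `iproute2` bonding driver; the interface names and IP address are assumptions, and the switch-side ports must be configured for LACP as well):

```shell
# Create an 802.3ad (LACP) bond; layer3+4 hashing spreads flows
# across both links by IP address and TCP/UDP port
ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer3+4

# Enslave the two 10GBase-T interfaces (names eth0/eth1 are assumptions)
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0

# Bring the bond up and assign an address (address is an assumption)
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0
```

Note that a single TCP stream still traverses only one physical link; aggregation helps multi-client workloads like the one benchmarked here, not single-stream transfers.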

Netgear RN716X Multi-Client CIFS Performance - 100% Sequential Reads

Netgear RN716X Multi-Client CIFS Performance - Max Throughput - 50% Reads

Netgear RN716X Multi-Client CIFS Performance - Random 8K - 70% Reads

Netgear RN716X Multi-Client CIFS Performance - Real Life - 65% Reads

The graphs for some of the rackmount units we have evaluated earlier are also presented for reference, but do remember that most of them have a different number of disks in their RAID-5 configurations. The Synology DS1812+ was also benchmarked with hard disks instead of the OCZ Vector SSDs used in the ReadyNAS 716. With speeds approaching 800 MBps in RAID-5 for certain access patterns, the RN716X lives up to Netgear's claim of being the fastest desktop NAS. Even higher bandwidth numbers can be obtained for specific access patterns by enabling jumbo frames in the network path.
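
Jumbo frames raise the Ethernet MTU from the default 1500 bytes to 9000, cutting per-packet overhead on large sequential transfers. A minimal sketch of enabling and verifying them on a Linux client (the interface name and NAS address are assumptions; every hop in the path, including the switch ports and the NAS itself, must be configured for the same MTU):

```shell
# Raise the MTU on the NAS-facing interface (name is an assumption)
ip link set dev eth0 mtu 9000

# Verify end-to-end with a non-fragmenting ping:
# 8972 bytes of payload = 9000 - 20 (IP header) - 8 (ICMP header)
ping -M do -s 8972 -c 4 192.168.1.10
```

If the ping fails with a "message too long" error, some device in the path is still running a standard 1500-byte MTU.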


24 Comments

  • Runiteshark - Wednesday, January 1, 2014 - link

    Some of the tests here are multi-client CIFS. Look at the throughput he's getting on a single client. I'm pushing 180 MBps over CIFS and 200 MBps over NFS, LAGging dual 1G links to a single client. The host pushing this data is a 72-bay Supermicro chassis with dual E5-2697 v2's, 256GB of RAM, 72 Seagate 5900rpm NAS drives, 4x Samsung 840 Pro 512GB SSDs, 3 LSI 2308 controllers, and a single Intel X520-T2 dual 10G NIC hooked up to an Extreme X670V over twinax with a frame size of 9216. Typical files are medium-sized at roughly 150MB each, copied with 48 threads of rsync.

    One thing I didn't see in the test bed was the configuration of jumbo frames, which definitely changes the characteristics of single-client throughput. I'm not sure if you can run large jumbo frames on the Netgear switch.

    If I needed 10G (which I don't, because the disks/proc in the microserver couldn't push much more), I could toss in a dual 10G Intel adapter for roughly $450.
  • imsabbel - Thursday, January 2, 2014 - link

    That's because his single-client tests only use a single 1 Gbit connection on the client side. I know, it's stupid, but the fact that ALL transfer tests are literally limited to something like 995 Mbit/s should have given you a clue that AnandTech does strange things with their testing.
  • Runiteshark - Friday, January 3, 2014 - link

    I didn't even see that! What the hell was the point of the test then?
  • Gigaplex - Wednesday, January 1, 2014 - link

    Am I reading this correctly? You used 1GbE, not 10GbE adapters on the test bed? I'd like to see single client speeds using 10GbE.
  • ZeDestructor - Wednesday, January 1, 2014 - link

    6 quad-port NICs + 1 on-board NIC, so 25 gigabit ports split over 25 VMs.

    As for single-client speeds, it should be possible to measure those using LAGs even with the current setup, and it's a point worth covering, although I would like to see some Intel X540 cards in use myself...
  • BMNify - Thursday, January 2, 2014 - link

    hmm, am i missing something here?
    you only use 6 x Intel ESA I-340 Quad-GbE Port Network Adapters

    as in only using 4 "1GbE" ports and NO actual "10GbE" card to max out the end-to-end connection?

    don't get me wrong, it's nice to finally get a commercial SOHO-type unit that actually has 10GbE as standard at a reasonable price after years of nothing but antiquated 1GbE cards, but you also NEED an extra 10GbE card to put in your PC alongside a 10GbE router/switch, so this $3K NAS is way too expensive for the SOHO masses today, alas.
  • ganeshts - Thursday, January 2, 2014 - link

    6x quad ports = 24 1-GbE ports + one onboard 1GbE = 25 GbE in total.
  • BMNify - Thursday, January 2, 2014 - link

    oh right, so it's 25 "1GbE" ports and NO actual "10GbE" card to max out the end-to-end connection
  • BMNify - Thursday, January 2, 2014 - link

    it still seems very odd to have a collection of 24 threads on a dual-socket, 6-core/12-thread test bench, a 10GbE router/switch, and this $3K NAS with a dual "10GbE" card that could be bonded together at both ends, and yet AT just tests the kit at the 1GbE port bottleneck. Why not install another dual "10GbE" card in the PC end and then try, for instance, starting several concurrent ffmpeg jobs upscaling and encoding high-profile/bitrate 1080p content to UHD over iSCSI etc. to the "10GbE" NAS, to max out all 12 cores/24 threads of SIMD, or other options to push that exclusive "10GbE" connection rather than any old combination of antiquated "1GbE" cards?
  • hoboville - Thursday, January 2, 2014 - link

    I hate sounding like a naysayer, but these boxes are so expensive. You can build a system with similar specs for much less with FreeNAS and ZFS (as other commenters have noted). Supermicro makes some great boards, and with the number of case options you get when you DIY, expandability is very much an option if you need it further down the road. Then again, a lot of the cost comes from the 10GbE NICs, which aren't cheap.
