Multi-Client Access - NAS Environment

We put the NAS drives in the QNAP TS-EC1279U-SAS-RP through some IOMeter tests with a CIFS share being accessed from up to 25 VMs simultaneously. The following four graphs show the total available bandwidth and the average response time while the drives are subjected to different types of workloads through IOMeter. IOMeter also reports various other metrics of interest, such as maximum response time, read and write IOPS, and separate read and write bandwidth figures. Per-drive results from our IOMeter benchmarking run are linked below:

  1. WD Red Pro
  2. Seagate Enterprise Capacity 3.5" HDD v4
  3. WD Red
  4. Seagate NAS HDD
  5. WD Se
  6. Seagate Terascale
  7. WD Re
  8. Seagate Constellation ES.3
  9. Toshiba MG03ACA400
  10. HGST Ultrastar 7K4000 SAS
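As a rough illustration of how per-client results like these get combined into the totals shown in the graphs, here is a minimal sketch. The file names, CSV column names, and layout are hypothetical, not those of the actual test harness; the point is that total bandwidth sums across clients while response time should be IOPS-weighted:

```python
# Sketch: aggregate per-VM IOMeter results into a total bandwidth figure
# and an overall average response time. Column names are hypothetical.
import csv

def aggregate(paths):
    total_mbps = 0.0
    weighted_resp = 0.0
    total_iops = 0.0
    for path in paths:
        with open(path, newline="") as f:
            row = next(csv.DictReader(f))
            iops = float(row["iops"])
            total_mbps += float(row["mbps"])
            # Weight each client's response time by its IOPS so that
            # busier clients contribute proportionally more.
            weighted_resp += float(row["avg_resp_ms"]) * iops
            total_iops += iops
    return total_mbps, weighted_resp / total_iops
```

A plain arithmetic mean of the per-VM response times would overstate the contribution of mostly idle clients, which is why the IOPS weighting matters.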

WD Red Pro Multi-Client CIFS Performance - 100% Sequential Reads

WD Red Pro Multi-Client CIFS Performance - Max Throughput - 50% Reads

WD Red Pro Multi-Client CIFS Performance - Random 8K - 70% Reads

WD Red Pro Multi-Client CIFS Performance - Real Life - 65% Reads

We see that the sequential accesses are still limited by the network link, this time on the NAS side. On the other hand, our random access tests show markedly better performance for drives such as the Seagate Enterprise Capacity, Seagate Constellation, and WD Re. Not only is the total available bandwidth higher, but the average response times also go down.
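The link between higher bandwidth and lower response times is not a coincidence: with a fixed number of clients each keeping I/Os in flight, Little's Law ties the two together. A quick sanity check, using illustrative numbers rather than values from the graphs, and assuming each of the 25 VMs holds one 8K I/O outstanding:

```python
# Little's Law: outstanding I/Os = IOPS * average response time.
# With a fixed client count, lower response time means higher IOPS.

def total_iops(clients, avg_resp_ms):
    # Each client keeps one I/O in flight (illustrative assumption).
    return clients / (avg_resp_ms / 1000.0)

def bandwidth_mbps(iops, block_kb=8):
    # Convert IOPS at a given block size into MBps.
    return iops * block_kb / 1024.0

iops = total_iops(25, avg_resp_ms=10.0)  # 2500 IOPS
bw = bandwidth_mbps(iops)                # ~19.5 MBps at 8K blocks
```

Halving the average response time in this model doubles the achievable IOPS, which is why the faster enterprise drives show gains on both axes at once.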

62 Comments

  • dzezik - Friday, September 26, 2014 - link

    that is why we do not use RAID but ZFS. think about it
  • Navvie - Monday, August 18, 2014 - link

    Thanks. Interesting read.
  • colinstu - Saturday, August 9, 2014 - link

    bought 4x 4TB SEs last year, at least I'm not missing out on anything!
  • dzezik - Friday, September 26, 2014 - link

    are you sure You still have Your data on the disk and not random zeros and ones. how can You be sure without daily scrubbing.
  • HollyDOL - Monday, August 11, 2014 - link

    Hi, are the bandwidths in graphs (page 5...) really supposed to be in Mbps (mega-bits per second)? Although it's correct bandwidth unit, the values seem to be really low (fastest tests would be about 30MB/s), the values provided I'd expect to be in MBps for the numbers to correspond...
  • ganeshts - Monday, August 11, 2014 - link

    Thanks for catching it. It is indeed MBps. I have fixed the issue.
  • GrumpyOldCamel - Wednesday, August 13, 2014 - link

    raid5, seriously?

    Why are you not focused on reliability, thankfully I see most of the other commentors are making similar points to mine, where did all the 10^16 and 10^17 drives go?

    Why are we not excited about the newly leaked 10^18 drive?

    When it comes to storage, you can keep size and you can keep speed, Im not interested.
    I just want reliability.
  • Gear8 - Saturday, September 13, 2014 - link

    Where are the heat measurements? Where are the temperatures in degrees Celsius?
  • dzezik - Friday, September 26, 2014 - link

    Hey. This test setup is wrong. There is one SAS disk, but there is no SAS HBA in the list of test setup. According to other benchmarks, the HGST SAS disk is the fastest from this list, but it suffers because of a poor or very poor controller. This comparison is worth nothing without a good SAS HBA. And remember, a good HBA also increases SATA disk performance. Embedded Intel controllers are very simple and limited in performance. A good SAS HBA is about $150, so it is not a big deal. Regards
  • KingSmurf - Wednesday, October 22, 2014 - link

    Just wondering this review states for the WD Se:
    Non-recoverable read errors per bits read < 1 in 10^14 and MTBF of 800k

    while on WD's Specsheet it says for the Se:

    Non-recoverable read errors per bits read < 1 in 10^15 and MTBF of 1 M (800k is the 1 TB only)

    Did WD suddenly change the Spec Sheet - or was this review... let's say less than thorough?
