Multi-Client Access - NAS Environment

We configured three of the Seagate Enterprise NAS HDD drives as a RAID-5 volume in the QNAP TS-EC1279U-SAS-RP. A CIFS share on the volume was then subjected to IOMeter tests with access from up to 25 VMs simultaneously. The following four graphs show the total available bandwidth and the average response time under different types of IOMeter workloads. IOMeter also reports various other metrics of interest, such as maximum response time, read and write IOPS, and separate read and write bandwidth figures. Some of the interesting aspects from our IOMeter benchmarking run are available here.
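
IOMeter drives these workloads from its own access specifications, but as a rough illustration of what a pattern such as "Random 8K - 70% Reads" actually does on the wire, the Python sketch below issues the same mix against a file on a mounted share and reports the two metrics graphed below. The mount path, file size, and operation count here are placeholders rather than our actual test parameters.

    # Rough approximation of an IOMeter-style "Random 8K - 70% Reads" pattern.
    # The mount path and sizes are placeholders; IOMeter itself drives the real
    # workload from its access specifications across the 25 client VMs.
    import os
    import random
    import time

    TEST_FILE = "/mnt/nas_share/testfile.bin"  # hypothetical CIFS mount
    FILE_SIZE = 1 << 30                        # 1 GiB test file
    BLOCK = 8 * 1024                           # 8K transfer size
    READ_PCT = 0.70                            # 70% reads, 30% writes
    OPS = 10_000

    fd = os.open(TEST_FILE, os.O_RDWR | os.O_CREAT)
    os.ftruncate(fd, FILE_SIZE)
    wbuf = os.urandom(BLOCK)

    latencies = []
    start = time.perf_counter()
    for _ in range(OPS):
        offset = random.randrange(FILE_SIZE // BLOCK) * BLOCK  # 8K-aligned
        t0 = time.perf_counter()
        if random.random() < READ_PCT:
            os.pread(fd, BLOCK, offset)
        else:
            os.pwrite(fd, wbuf, offset)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    os.close(fd)

    print("Bandwidth: %.1f MBps" % (OPS * BLOCK / elapsed / 1e6))
    print("Avg response time: %.2f ms" % (1000 * sum(latencies) / OPS))

The sequential runs simply replace the random offset with a steadily increasing one, which is why they saturate the network link long before the disks.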

Seagate Enterprise NAS HDD Multi-Client CIFS Performance - 100% Sequential Reads

Seagate Enterprise NAS HDD Multi-Client CIFS Performance - Max Throughput - 50% Sequential Reads

Seagate Enterprise NAS HDD Multi-Client CIFS Performance - Random 8K - 70% Reads

Seagate Enterprise NAS HDD Multi-Client CIFS Performance - Real Life - 60% Random 65% Reads

We see that the sequential accesses are still limited by the network link, but this time on the NAS side. On the other hand, our random access tests show markedly better performance for the enterprise drives. In terms of average response times, the Seagate Enterprise Capacity v4, the Seagate Enterprise NAS HDD and the HGST Ultrastar He6 all fall in the same category. Amongst these three, the Enterprise NAS HDD doesn't win out on any particular benchmark. However, that is to be expected - the other drives in the comparison list are targeted at true enterprise-class / datacenter applications.

Comments

  • Supercell99 - Friday, December 12, 2014

    Most cloud providers are very slow if you use their HDD-based storage solutions. I am referring to in-house shops that run Dell/HP with vSphere or Oracle DBs - anything needing a lot of storage and decent I/O. The cost difference between building a drive with a SAS interface and one with SATA is minimal, but the performance difference can be big under a lot of simultaneous requests.
  • MrSpadge - Thursday, December 11, 2014

    Sure, SAS enterprise HDDs are faster... but at QD > 32 any HDD is just crawling. For such high loads you really want your hot data to be on flash.
  • hlmcompany - Thursday, December 11, 2014

    Exactly. That's why real enterprise storage manufacturers, like HGST, provide a host of flash storage options, from high-capacity HH-HL PCIe cards to low-capacity flash caching for large HDD farms.
  • shodanshok - Thursday, December 11, 2014

    WD Red drives seem to have some serious performance bottleneck, even taking into account the slow (5400 RPM) spindle speed.

    They seem to suffer from an underpowered controller and simplified firmware, as they appear unable to coalesce multiple 512B writes into one 4K sector. For example, see how badly the WD Red fares in the HD Tach 512B random write test:

    WD RED: 25.475 ms
    Ent NAS: 6.646 ms

    While the Enterprise NAS has a larger cache (128 MB vs 64 MB), it is unlikely that the cache alone can account for such a large performance improvement in a random write scenario.
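
    To show what I mean by coalescing, here is a toy model in Python - obviously nothing like the drives' actual firmware, and read_phys/write_phys are made-up stand-ins for the internal 4K sector accessors:

    # Buffer 512B logical writes and flush each dirty 4K physical sector once,
    # instead of paying a full read-modify-write cycle per 512B update.
    SECTOR, PHYS = 512, 4096

    class CoalescingCache:
        def __init__(self, dev):
            self.dev = dev      # assumed to expose read_phys() / write_phys()
            self.dirty = {}     # physical sector index -> bytearray(4096)

        def write_512(self, lba, data):
            phys, off = divmod(lba * SECTOR, PHYS)
            if phys not in self.dirty:
                # one 4K read, no matter how many 512B writes hit this sector
                self.dirty[phys] = bytearray(self.dev.read_phys(phys))
            self.dirty[phys][off:off + SECTOR] = data

        def flush(self):
            for phys, buf in self.dirty.items():
                self.dev.write_phys(phys, bytes(buf))  # one 4K write per sector
            self.dirty.clear()

    With this, eight adjacent 512B writes cost one 4K read plus one 4K write; without it, each one becomes its own read-modify-write cycle, which would explain numbers like the above.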

    On the other hand, the random read test is in line with the different spindle speeds (~18.5 ms vs ~14.5 ms).

    @ganesh: any possibility to ping WD about that?
  • theKai007 - Thursday, December 11, 2014

    Intel announced the Intel IoT Platform, an end-to-end reference model designed to unify and simplify connectivity and security for the Internet of Things. http://bit.ly/1yCMSnB
  • BPB - Thursday, December 11, 2014

    Are any of these suitable for DVR-type applications? I'd like to get a bigger drive for my WMC setup. I've been using the WD AV-GP series since they are geared towards non-stop I/O in DVR-type usage.
  • Visual - Monday, December 15, 2014

    Not at all. RAID helps distribute the data across drives and gets a speedup that is at most linear in the number of drives, but random access is still random access, and is still slow.

    What romrunning "invented" is a software stack that remaps sectors so that random logical accesses become physically sequential. I believe some company, maybe Fusion-io, did have something like this, though now that I look for it I cannot find anything that is not flash-based.

    The idea can definitely work pretty well for speeding up random writes, but for reads it needs some quite good analysis and statistics about which sequences are commonly read, and does not seem too feasible. Maybe that's why they dropped it and use flash caches.
  • Visual - Monday, December 15, 2014

    And why did this not appear as a reply to the post I clicked 'reply' on (in a new tab)? Anandtech... get some web devs with a brain... it is not rocket science.
  • shodanshok - Monday, December 15, 2014

    Modern copy-on-write filesystems such as ZFS and BTRFS (and, to a limited extent, even classical filesystems such as EXT4 and XFS) do exactly that. They transform random writes into sequential ones, using the available space similarly to a circular log buffer.

    For write-intensive, read-insensitive workloads they are a great choice, but for some common scenarios (e.g. databases) they perform quite poorly. Moreover, the resulting files are often very heavily fragmented, leading to very low read performance (when used on top of spinning disks).

    For more information and some benchmarks:
    - http://www.ilsistemista.net/index.php/linux-a-unix...
    - http://en.wikipedia.org/wiki/Log-structured_file_s...
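
    As a toy sketch of the remapping principle itself (nothing like ZFS's real on-disk logic, just the idea):

    # Every logical write becomes a sequential append to a log file, and a
    # map tracks where each logical block currently lives.
    BLOCK = 4096

    class LogStructuredStore:
        def __init__(self, path):
            self.log = open(path, "wb+")
            self.map = {}   # logical block number -> byte offset in the log

        def write_block(self, lbn, data):
            assert len(data) == BLOCK
            self.log.seek(0, 2)             # always append: sequential on disk
            self.map[lbn] = self.log.tell()
            self.log.write(data)

        def read_block(self, lbn):
            self.log.seek(self.map[lbn])    # may land anywhere: random on disk
            return self.log.read(BLOCK)

    Overwrites never touch the old copy, so stale blocks pile up in the log; garbage-collecting them (and keeping logically adjacent blocks physically close for reads) is where the real filesystems spend their complexity.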

    Regards.
  • akula2 - Thursday, December 18, 2014

    This review has some misses with respect to the enterprise segment. The hardware architecture isn't great. Most importantly, this implementation isn't suitable for organizations with hundreds of employees accessing data from multiple nations. Lastly, based on five years of experience deploying NAS solutions in my businesses, I have observed Seagate drives failing more often than their Hitachi counterparts.
