AnandTech DAS Suite - Benchmarking for Performance Consistency

Our testing methodology for DAS units takes into consideration the typical use-cases for such devices. The most common usage scenario is the transfer of large numbers of photos and videos to and from the unit. Other usage scenarios include using the DAS as a download or install location for games, and importing files directly off the DAS into a multimedia editing program such as Adobe Photoshop. Some users may even opt to boot an OS off an external storage device.

The AnandTech DAS Suite tackles the first use-case. The evaluation involves processing three different workloads:

  • Photos: 15.6 GB collection of 4320 photos (RAW as well as JPEGs) in 61 sub-folders
  • Videos: 16.1 GB collection of 244 videos (MP4 as well as MOVs) in 6 sub-folders
  • BR: 10.7 GB Blu-ray folder structure of the IDT Benchmark Blu-ray

Each workload's data set is first placed in a 25GB RAM drive, and a robocopy command is issued to transfer it to the DAS under test (formatted in NTFS). Upon completion of the transfer (write test), the contents from the DAS are read back into the RAM drive (read test). This process is repeated three times for each workload. Read and write speeds, as well as the time taken to complete each pass are recorded. Bandwidth for each data set is computed as the average of all three passes.
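
For readers who want to approximate the setup, a minimal sketch of the transfer loop is shown below. The paths, folder layout, and robocopy flags are illustrative assumptions for this sketch, not our actual automation harness; it assumes the RAM drive is mounted at R:\ and the DAS under test (NTFS) at D:\.

```python
# Hypothetical sketch of the DAS Suite transfer loop (paths and flags are illustrative).
import shutil
import subprocess
import time

WORKLOADS = {                      # name -> (source folder on the RAM drive, payload size in GB)
    "Photos": ("R:\\Photos", 15.6),
    "Videos": ("R:\\Videos", 16.1),
    "BR":     ("R:\\BR",     10.7),
}
PASSES = 3

def timed_copy(src: str, dst: str) -> float:
    """robocopy src to a freshly cleared dst and return the elapsed time in seconds."""
    shutil.rmtree(dst, ignore_errors=True)          # clear the target so nothing is skipped as "same"
    start = time.perf_counter()
    subprocess.run(["robocopy", src, dst, "/MIR"])  # robocopy exit codes below 8 mean success
    return time.perf_counter() - start

for name, (src, size_gb) in WORKLOADS.items():
    das_copy = "D:\\" + name                        # working copy on the DAS under test
    readback = "R:\\Readback\\" + name              # destination for the read-back pass on the RAM drive
    write_times, read_times = [], []
    for _ in range(PASSES):
        write_times.append(timed_copy(src, das_copy))      # RAM drive -> DAS (write test)
        read_times.append(timed_copy(das_copy, readback))  # DAS -> RAM drive (read test)
    # bandwidth averaged across all passes (1 GB taken as 1000 MB here)
    print(f"{name}: write {size_gb * 1000 * PASSES / sum(write_times):.1f} MBps, "
          f"read {size_gb * 1000 * PASSES / sum(read_times):.1f} MBps")
```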

Blu-ray Folder Read

The write workloads see the Extreme PRO v2 come out slightly ahead of the WD_BLACK P50 on the Haswell testbed. On the reads, the Hades Canyon / eGFX enclosure configuration turns out to be better - this can be attributed in part to the capabilities of the testbed itself rather than the PCIe tunneling chain. In any case, we don't see any significant gulf in the numbers between the different units as long as the observations are made within the same USB SuperSpeed 10Gbps or USB SuperSpeed 20Gbps host configuration. We also instrumented our evaluation scheme to determine performance consistency.

Performance Consistency

Aspects influencing performance consistency include SLC caching and thermal throttling / firmware caps on access rates to avoid overheating. This is important for power users, as the last thing they want to see when copying over hundreds of gigabytes of data is the transfer rate dropping to USB 2.0 speeds.

In addition to tracking the instantaneous read and write speeds of the DAS while processing the AnandTech DAS Suite, the temperature of the drive was also recorded at the beginning and end of the processing. In earlier reviews, we used to track the temperature throughout the test. However, we have observed that polling the SMART temperature read-outs of NVMe SSDs behind bridge chips ends up negatively affecting the actual transfer rates. To avoid this problem, we have restricted ourselves to recording the temperature at the beginning and end of the actual workload set. The graphs below present the recorded data.
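
As a rough illustration of the kind of spot check involved, something along the following lines could record the temperature before and after the workload set using smartmontools. The device path and output parsing are assumptions for this sketch, and the bridge chip in the enclosure must pass the drive's health data through (a -d device-type hint may be needed for some bridges).

```python
# Hypothetical temperature spot check via smartctl; device path and parsing are illustrative.
import re
import subprocess

def drive_temperature(device: str = "/dev/sdb") -> int | None:
    """Return the drive temperature reported by smartctl, in degrees Celsius."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    match = re.search(r"Temperature:\s+(\d+)\s+Celsius", out)
    return int(match.group(1)) if match else None

temp_before = drive_temperature()
# ... process the full DAS Suite workload set here ...
temp_after = drive_temperature()
if temp_before is not None and temp_after is not None:
    print(f"Temperature delta across the workload set: {temp_after - temp_before} C")
```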

Performance Consistency and Thermal Characteristics

The first three sets of writes and reads correspond to the photos suite. A small gap (for the transfer of the video suite from the internal SSD to the RAM drive) is followed by three sets for the video suite. Another small RAM-drive transfer gap is followed by three sets for the Blu-ray folder. An important point to note here is that each of the first three blue and green areas corresponds to 15.6 GB of writes and reads respectively. There is no issue with thermal throttling - even in the fastest configuration, both the P50 and the Extreme PRO v2 show an increase of less than 5°C after the workload processing. The P50 seems to have slightly better thermal performance for this workload set.

Comments

  • Eric_WVGG - Tuesday, October 6, 2020 - link

    On a related note, I would love to see you guys do some kind of investigation into why we're five years into this standard and one still cannot buy an actual USB-C hub (i.e. not a port replicator).
  • hubick - Tuesday, October 6, 2020 - link

    A 3.2 hub with gen 2 ports and a 2x2 uplink would be cool!

    I wanted a 10gbps / gen 2 hub and got the StarTech HB31C3A1CS, which at least has a USB-C gen 2 uplink and a single USB-C gen 2 port (plus type A ports). Don't know if you can do any better than that right now.
  • repoman27 - Tuesday, October 6, 2020 - link

    Although it's still not exactly what you're looking for, I've tried (unsuccessfully) to get people to understand what a unicorn the IOGEAR GUH3C22P is. Link: https://www.iogear.com/product/GUH3C22P

    It's a 5-port USB3 10Gbps hub with a tethered USB Type-C cable on the UFP (which supports up to 85W USB PD source), two (2!) downstream facing USB Type-C ports (one of which supports up to 100W USB PD sink), and two USB Type-A ports (one of which supports up to 7.5W USB BC).
  • serendip - Thursday, October 8, 2020 - link

    No alt mode support for things like DisplayPort, though. I haven't found a portable Type-C hub that supports DisplayPort alt mode over downstream Type-C ports, although some desktop docks support it.
  • stephenbrooks - Tuesday, October 6, 2020 - link

    What if I want a USB 20Gbps port on the front of my computer?

    Can I get a USB-C front panel and somehow connect the cable internally to the PCI-E USB card?
  • abufrejoval - Tuesday, October 6, 2020 - link

    I am building hyperconverged clusters for fun and for work. The home-lab one is built out of silent/passive J5005 Atoms with 32 GB RAM and 1 TB SATA SSDs; the next iteration will most likely be built from 15 W TDP NUCs - an i7-10700U with 64 GB RAM and a 1 TB NVMe SSD is in testing.

    Clusters need low-latency, high-bandwidth interconnects. Infiniband is a classic in data centers, but NUCs offer pretty much only 1 Gbit Ethernet, with Intel struggling to do 2.5 Gbit there, while Thunderbolt and USB3/4 could do much better. Only they aren't peer-to-peer, and a TB 10GBase-T adapter sets you back further than the NUC itself while adding lots of latency and TCP/IP, when what I really want is RDMA.

    So could we please pause for a moment and think on how we can build fabrics out of USB-X? Thunderbolt/USB4 is already about PCIe lanes, but most likely with multi-root excluded to maintain market segmentation and reduce validation effort.

    I hate how the industry keeps going to 90% of something really useful and then concentrating on 200% speed instead of creating real value.
  • repoman27 - Wednesday, October 7, 2020 - link

    Uh, Thunderbolt and USB4 are explicitly designed to support host-to-host communications already. OS / software support can be a limiting factor, but the hardware is built for it.

    Existing off-the-shelf solutions:
    https://www.dataonstorage.com/products-solutions/k...
    https://www.gosymply.com/symplyworkspace
    https://www.areca.com.tw/products/thunderbolt-8050...
    http://www.accusys.com.tw/T-Share/

    IP over Thunderbolt is also available:
    https://thunderbolttechnology.net/sites/default/fi...™%20Networking%20Bridging%20and%20Routing%20Instructional%20White%20Paper.pdf
    https://support.apple.com/guide/mac-help/ip-thunde...
  • repoman27 - Wednesday, October 7, 2020 - link

    Stupid Intel URL with ™ symbol. Let's try that again:
    https://thunderbolttechnology.net/sites/default/fi...
  • abufrejoval - Wednesday, October 7, 2020 - link

    Let me tell you: You just made my day! Or more likely one or two week-ends!

    Not being a Mac guy, I had completely ignored Thunderbolt for a long time and never learned that it supported networking natively. From the Intel docs it looks a bit similar to Mellanox VPI and host-chaining: there I can use 100 Gbit links without any switch to link three machines in a kind of “token ring” manner for Ethernet (these are hybrid adapters that would also support Infiniband, but driver support for host-chaining is Ethernet only). Unfortunately the effective bandwidth is only around 35 Gbit/s for direct hops and drops to 16 Gbit/s once it has to pass through another host: not as much of an upgrade over 10GBase-T as you’d hope for. I never really got into testing latencies, which is where the Infiniband personality of those adapters should shine.

    And that’s where I am hoping TB brings significant improvements over Ethernet, apart from the native 40 Gbit/s speed: just right for Gluster storage!

    I also tried to get Ethernet over Fibre Channel working years ago, when they were throwing out 4 Gbit adapters in the data center, but even though it was specified as a standard, Ethernet over Fibre Channel never got driver support, and at the higher speeds the trend went in the other direction.

    So I’ll definitely try to make the direct connection over TB3 work: CentOS 8 should have kernel support for TB networking, and the best news is that it doesn’t have to wait for TB4 - it should work with TB3, too.

    I’ve just seen what seemed like an incredibly cheap 4-way TB switch recommended by Anton Shilov on the TomsHardware side of this enterprise, which unfortunately is only on pre-order for now (https://eshop.macsales.com/shop/owc-thunderbolt-hu...), but it is supposed to support TB networking. Since the NUCs are single-port TB3 only, that should still do the trick and be upgradable to TB4 for just around $150… The 5 Gbit USB3 Aquantia NIC wasn’t much cheaper, and even 2.5 Gbit USB3 NICs are still around $40.

    Exciting, exciting all that: Thank you very much for those links!
  • abufrejoval - Wednesday, October 7, 2020 - link

    ...except... I don't think that "switch" will be supporting multiple masters, same as USB.

    If it did, Intel would have shot themselves in the foot: 40 Gbit networking on NUCs and laptops with little more than passive cables. That’s almost as bad as adding a system management mode on the 80486L and finding that it can be abused to implement a hypervisor (Mendel and Diane started VMware with that trick).

    Yet that's exactly what consumers really thirst for.
