Testbed Setup and Testing Methodology

The rising popularity of USB 3.1 (both Gen 1 and Gen 2) Type-C direct-attached storage (DAS) devices and the upcoming Thunderbolt 3 DAS units made it clear that I had to work on some updates to our direct-attached storage testbed. Originally based on the Haswell platform, the DAS testbed used a Thunderbolt 2 PCIe add-on card and the USB 3.0 ports hanging off the PCH. For a brief while, I also added USB 3.1 Gen 2 Type-A and Type-C PCIe cards to evaluate a few DAS units.

The introduction of Skylake has been quite interesting from the viewpoint of fast local storage. While the 100-series chipset doesn't have native USB 3.1 Gen 2 support, it does have plenty of high-speed PCIe 3.0 lanes that enable high-speed bridges to other protocols. Motherboard vendors have decided to enable USB 3.1 on entry-level boards with an ASMedia bridge chip. However, premium boards can be equipped with Intel's own Alpine Ridge controller. As mentioned in the previous section, Thunderbolt 3 and Intel's Alpine Ridge are interesting for a few reasons:

  • In addition to Thunderbolt 3, Alpine Ridge also integrates a USB 3.1 Gen 2 (10 Gbps) host controller
  • Thunderbolt 3 works over a Type-C interface, and supports a couple of additional protocols - USB 3.1 Gen 2 and DisplayPort 1.2

Considering these aspects, it made sense to migrate our DAS testbed to Skylake. In particular, I looked for a board with an integrated Alpine Ridge controller. Ian published the review of the GIGABYTE Z170X-UD5 TH, and it turned out that the board perfectly fit the requirements. Note that the DisplayPort output and four PCIe 3.0 lanes from the Z170 PCH feed into the Alpine Ridge controller, which then produces two Type-C ports that can be used as '2x Intel Thunderbolt 3', '2x DisplayPort', or '2x USB Type-C'.

Intel provided us with a sample of the Core i5-6600K to use in the board. G.Skill also came forward with four 8GB DDR4 DIMMs to give the testbed 32GB of DRAM (the same as our Haswell-based testbed).

The Corsair Carbide Series Air 540 chassis in our Haswell-based testbed has been great in terms of footprint, ventilation and easy access to components. Two hot-swap internal SATA slots turned out to be a boon for quick secure erases of SSDs as well as benchmarking of internal HDDs meant for NAS usage in single-disk mode. However, this unintended usage model (I wasn't planning on doing this frequently when I first opted for the Corsair Air 540) was a bit of a hassle, since one of the chassis sides had to be dismounted to access the hot-swap slots. I wanted to address this issue in the new testbed.

On the lookout for an ATX chassis for the new testbed, I had three main requirements:

  • Hot-swap bays accessible without the need to open up the unit (similar to the drive slots in hot-swap NAS units)
  • Portability in terms of being easy to shift from one location in the lab to another (something I realized was important when trying to test daisy chaining with a Thunderbolt 2 DAS unit last year)
  • Cubical footprint with horizontal motherboard orientation, in order to better fit on a workbench and enable easy swapping of PCIe cards in the future

The Cooler Master HAF XB EVO perfectly fit our requirements. The two X-Dock bays fulfilled our need for hot-swap bays for both 3.5" and 2.5" drives. Since the unit is marketed as a LAN box, it has two rigid carry handles on the side panels to enable portability. The unit can also easily serve as a testbench. Only the top cover (held in place by two screws at the back) needs to be removed in order to access the PCIe cards. The PSU slot also extends slightly out, enabling easier cable management inside the chassis. With plenty of additional drive slots in addition to the X-Dock, it was a no-brainer to go with the Cooler Master HAF XB EVO.

We have traditionally sourced the PSU for our testbeds from the chassis vendor as well. Cooler Master suggested the fully modular V750 for use in our system.

Even though a 750W PSU is overkill for a system with no discrete GPUs, the rating gives us headroom for future additions. The fully modular nature also helped greatly in cable management.

In addition to the above, we made use of a few components that were salvaged from earlier reviews / unused components from previous builds - a Corsair Hydro Series H105 liquid CPU cooler, a Samsung SM951 NVMe PCIe 3.0 x4 SSD for the boot drive, and an Intel 730 series 480 GB SSD and a Corsair Neutron XT 480 GB SSD for use as staging drives for temporary data. The gallery below provides some more pictures from our build process.

Evaluation of DAS units (both Thunderbolt 3-based and USB 3.x-based ones) on Windows is being done with the testbed outlined in the table below.

AnandTech DAS Testbed Configuration

Motherboard    GIGABYTE Z170X-UD5 TH ATX
CPU            Intel Core i5-6600K
Memory         G.Skill Ripjaws 4 F4-2133C15-8GRR
               32 GB (4x 8GB)
               DDR4-2133 @ 15-15-15-35
OS Drive       Samsung SM951 MZVPV256 NVMe 256 GB
SATA Devices   Corsair Neutron XT SSD 480 GB
               Intel SSD 730 Series 480 GB
Add-on Card    None
Chassis        Cooler Master HAF XB EVO
PSU            Cooler Master V750 750 W
OS             Windows 10 Pro x64

Thanks to Cooler Master, GIGABYTE, G.Skill and Intel for the build components.

Our direct-attached storage testing involves artificial benchmarks (ATTO and CrystalDiskMark) as well as real-world data transfer scenarios (photographs, videos and documents). In addition, we run the PCMark 8 Storage Bench for select multimedia editing workloads. Finally, for simultaneous multi-target testing (multiple drives in a JBOD, or two or more daisy-chained units), we utilize Iometer to get an idea of the total performance.
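The real-world transfer scenarios boil down to timed copies of representative file sets to and from the DAS volume. A minimal sketch of the idea is below; the helper name and paths are illustrative placeholders, not our actual test harness or corpus:

```python
# Hypothetical sketch: time a copy of a folder of test files to the DAS
# volume and report average throughput. Our actual test suite uses fixed
# corpora of photos, videos and documents; paths here are placeholders.
import os
import shutil
import time

def measure_copy_throughput(src_dir: str, dst_dir: str) -> float:
    """Copy src_dir to dst_dir and return average throughput in MB/s."""
    # Sum the sizes of all files in the source tree.
    total_bytes = sum(
        os.path.getsize(os.path.join(root, name))
        for root, _, files in os.walk(src_dir)
        for name in files
    )
    # Time the recursive copy; dst_dir must not already exist.
    start = time.perf_counter()
    shutil.copytree(src_dir, dst_dir)
    elapsed = time.perf_counter() - start
    return total_bytes / elapsed / 1e6

# Example usage (DAS mounted as E: on the Windows testbed):
# mbps = measure_copy_throughput("D:/corpus/photos", "E:/photos")
```

Running the same copy in both directions (testbed to DAS, and back) separates write performance from read performance for each file-type mix.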

The Nuts and Bolts of Thunderbolt 3 Direct-Attached Storage Performance
Comments
  • name99 - Thursday, April 14, 2016 - link

    I don't understand this obsession with daisy-chaining. Daisy-chaining is a LOUSY technology. It's been a lousy technology in every damn form it's ever shipped, whether SCSI, ADB, firewire, or thunderbolt. One of the few things USB actually got right from the start was to make it clear on day one that their expansion solution was hubs, not daisy-chaining.

    Why does it suck?
    - It substantially reduces your power-on-off flexibility. This may not matter in a testing lab, but in the real world there are constant reasons why you might want to power a device off. With a hub this is a simple issue; with a daisy-chain it requires considering the implications of everything that is connected, and generally unmounting a bunch of devices then changing the topology.

    - right now when it's all skittles and roses, every thunderbolt device comes with two ports. But as soon as this goes mainstream, the usual attempts at cost-cutting will have one device after another shipping with only one port. And then what happens to your chaining?

    Because USB got this right on day one, USB hubs have always been cheap as dirt. Everybody owns one, and devices that need to present the illusion of daisy-chaining (like keyboards with two USB ports, one for the mouse to connect to; or displays with USB connectors) just stick in a cheap USB hub chip. Because Firewire (and the other specs I mentioned) did NOT get this right, FW hubs never became cheap. Even the FW400 hubs were expensive, and I don't think decent FW800 hubs were EVER produced (when I was looking for them, the best I could find was a pathetic two port hub).

    Instead of cheering how great Thunderbolt daisy-chaining is, you should be considering the reality that, because Intel has insisted on doing things this way (in spite of THIRTY YEARS of evidence that it is a stupid idea) they are likely going to snatch defeat from the jaws of victory. All those thunderbolt-enabled USB C ports will ACTUALLY land up connected to pure USB3.1 hubs, which will in turn, once again, mean that USB3.1 is the only really viable mass market for storage, and these super-high-end storage solutions (and external GPUs, etc) will continue to remain irrelevant to the mass market.
    Nice going Intel --- turns out instruction sets are not the only things you're incapable of handling competently.
  • Klug4Pres - Friday, April 15, 2016 - link

    Enjoyable rant, thanks!
  • Wardrop - Friday, April 15, 2016 - link

    I'm sure Apple, who are obsessed with having a single cable for everything, would have been the ones who pushed Intel to support daisy chaining.

    Daisy chaining isn't a bad idea if implemented properly though. It should be passive to really work, as in, a physically unplugged device should be able to pass through a thunderbolt signal. Like a switch that opens and closes depending on whether the device is powered on or not.
  • galta - Friday, April 15, 2016 - link

    A little bit angrier than I would have expected, but correct in its essence.
    All these weird proprietary interfaces fail from a combination of high costs and lack of scale. All of us - or at least most of us - remember when Micro Channel was thought to be the future, and we all know where it ended.
    As someone said before, Thunderbolt, as well as FireWire in its time, will make sense only for the 15 people who do 4K video editing on a 5K monitor on their Apples.
    The rest will be more than glad to stick with USB.
  • zodiacfml - Friday, April 15, 2016 - link

    Same here. I never understood daisy chaining. I dismissed it long ago as a feature only some people use.
  • ganeshts - Friday, April 15, 2016 - link

    Daisy chaining is a feature that is available.

    It is not mandatory that it needs to be used.

    Most people can just use a dock and it would have all the types of USB 3.x ports that they need.

    The beauty of Thunderbolt 3 is that it allows for just a single interface in sleek products, and it will have an ecosystem that allows people to pick and choose what interfaces they want in their system when 'docked' - that can't be said for proprietary interfaces developed by system vendors. (though I do agree that Thunderbolt being restricted to Intel-only systems is a bit of an issue in the long run - if AMD manages to claw back to performance parity with mid-range and higher Intel systems)
  • hyno111 - Friday, April 15, 2016 - link

    The ATTO and CrystalDiskMark result for SSD RAID is missing.
  • ganeshts - Friday, April 15, 2016 - link

    My apologies. There was a CMS issue when we updated the HDD results. It is now fixed.
  • epobirs - Saturday, April 16, 2016 - link

    Considering the main bottleneck here is going to be SATA, it seems like the box could have been implemented with USB 3.1 Gen 2 and delivered the same performance at lower cost. Even with two SSDs rather than platter drives, the best throughput after overhead should rarely exceed what USB 3.1 can handle.

    Down the road, a box with slots for, say, four U.2 SSDs, should really utilize Thunderbolt 3's bandwidth while still being small enough to consider portable. THAT would be worth spending a good amount for a professional user, being able to access live data or do very large backups at those speeds in a rig small enough to go on a location shoot comfortably.
  • ganeshts - Saturday, April 16, 2016 - link

    Definitely. The performance of a single unit is very close to that of the bus-powered SanDisk Extreme 900 we reviewed before. However, this unit is clearly meant to introduce the benefits of Thunderbolt 3 to the market - DisplayPort output, daisy chaining with docks for extra functionality, etc. The storage bandwidth from a single unit is not the main focus, as this is one of the first Thunderbolt 3 devices to be introduced. We will soon see high bay-count devices with Thunderbolt 3 at NAB next week - Accusys has already pre-announced a 12-bay one.
