Testbed Setup and Testing Methodology

The rising popularity of USB 3.1 (both Gen 1 and Gen 2) Type-C direct-attached storage (DAS) devices and the upcoming Thunderbolt 3 DAS units made it clear that I had to work on some updates to our direct-attached storage testbed. Originally based on the Haswell platform, the DAS testbed used a Thunderbolt 2 PCIe add-on card and the USB 3.0 ports hanging off the PCH. For a brief while, I also added USB 3.1 Gen 2 Type-A and Type-C PCIe cards to evaluate a few DAS units.

The introduction of Skylake has been quite interesting from the viewpoint of fast local storage. While the 100-series chipset doesn't have native USB 3.1 Gen 2 support, it does have plenty of PCIe 3.0 lanes that enable high-speed bridges to other protocols. Motherboard vendors have opted to enable USB 3.1 on entry-level boards with an ASMedia bridge chip. However, premium boards can be equipped with Intel's own Alpine Ridge controller. As mentioned in the previous section, Thunderbolt 3 and Intel's Alpine Ridge are interesting for a few reasons:

  • In addition to Thunderbolt 3, Alpine Ridge also integrates a USB 3.1 Gen 2 (10 Gbps) host controller
  • Thunderbolt 3 works over a Type-C interface, and supports a couple of additional protocols - USB 3.1 Gen 2 and DisplayPort 1.2

Considering these aspects, it made sense to migrate to Skylake for our DAS testbed. In particular, I looked for a board with an integrated Alpine Ridge controller. Ian published the review of the GIGABYTE Z170X-UD5 TH, and it turned out that the board perfectly fit the requirements. Note that a DisplayPort output and four PCIe 3.0 lanes from the Z170 PCH feed the Alpine Ridge controller, which in turn drives two Type-C ports that can operate as '2x Intel Thunderbolt 3', '2x DisplayPort' or '2x USB Type-C'.
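As a rough back-of-envelope check (the figures below are assumptions taken from the published PCIe 3.0 and Thunderbolt 3 signaling specifications, not measurements from this board), the PCIe 3.0 x4 uplink from the PCH is what ultimately caps storage traffic on the two Thunderbolt 3 ports; DisplayPort traffic enters the controller separately and protocol overhead is ignored here:

```python
# Back-of-envelope bandwidth budget for the Alpine Ridge uplink (assumed figures, not measurements).
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding.
pcie3_lane_gbps = 8 * (128 / 130)      # ~7.88 Gbps of payload per lane
uplink_gbps = 4 * pcie3_lane_gbps      # x4 uplink from the Z170 PCH, ~31.5 Gbps (~3.94 GB/s)
tb3_port_gbps = 40                     # Thunderbolt 3 signaling rate per port

print(f"PCH uplink:    {uplink_gbps:.1f} Gbps (~{uplink_gbps / 8:.2f} GB/s)")
print(f"Two TB3 ports: {2 * tb3_port_gbps} Gbps of combined link bandwidth")
print("Storage throughput is therefore bounded by the x4 uplink, not the Thunderbolt links.")
```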

Intel provided us with a sample of the Core i5-6600K to use in the board. G.Skill also came forward with four 8GB DDR4 DIMMs to give the testbed 32GB of DRAM (the same as our Haswell-based testbed).

The Corsair Carbide Series Air 540 chassis in our Haswell-based testbed has been great in terms of footprint, ventilation and easy access to components. Two hot-swap internal SATA slots turned out to be a boon for quick secure erases of SSDs as well as benchmarking of internal HDDs meant for NAS usage in single-disk mode. However, this unintended usage model (I wasn't planning on doing this frequently when I first opted for the Corsair Air 540) was a bit of a hassle, since one of the chassis sides had to be dismounted to access the hot-swap slots. I wanted to address this issue in the new testbed.

On the lookout for an ATX chassis for the new testbed, I had three main requirements:

  • Hot-swap bays accessible without the need to open up the unit (similar to the drive slots in hot-swap NAS units)
  • Portability in terms of being easy to shift from one location in the lab to another (something I realized was important when trying to test daisy chaining with a Thunderbolt 2 DAS unit last year)
  • Cubical footprint with horizontal motherboard orientation in order to better fit in a workbench and enable easy swapping out of PCIe cards in the future

The Cooler Master HAF XB EVO perfectly fit our requirements. The two X-Dock bays fulfilled our need for hot-swap bays for both 3.5" and 2.5" drives. Since the unit is marketed as a LAN box, it has two rigid carry handles on the side panels to enable portability. The unit can also easily serve as a testbench. Only the top cover (held in place by two screws at the back) needs to be removed in order to access the PCIe cards. The PSU slot also extends slightly out, enabling easier cable management inside the chassis. With plenty of additional drive slots in addition to the X-Dock, it was a no-brainer to go with the Cooler Master HAF XB EVO.

We have traditionally sourced the PSU for our testbeds from the chassis vendor as well. Cooler Master suggested the fully modular V750 for use in our system.

Even though a 750W PSU is overkill for a system with no discrete GPUs, the rating leaves headroom for future expansion. The fully modular nature also helped greatly with cable management.

In addition to the above, we made use of a few components salvaged from earlier reviews and unused parts from previous builds - a Corsair Hydro Series H105 liquid CPU cooler, a Samsung SM951 NVMe PCIe 3.0 x4 SSD for the boot drive, and an Intel 730 series 480 GB SSD and a Corsair Neutron XT 480 GB SSD for use as staging drives for temporary data. The gallery below provides some more pictures from our build process.

Evaluation of DAS units (both Thunderbolt 3-based and USB 3.x-based ones) on Windows is carried out with the testbed outlined in the table below.

AnandTech DAS Testbed Configuration
  Motherboard   GIGABYTE Z170X-UD5 TH ATX
  CPU           Intel Core i5-6600K
  Memory        G.Skill Ripjaws 4 F4-2133C15-8GRR
                32 GB (4x 8GB) DDR4-2133 @ 15-15-15-35
  OS Drive      Samsung SM951 MZVPV256 NVMe 256 GB
  SATA Devices  Corsair Neutron XT SSD 480 GB
                Intel SSD 730 Series 480 GB
  Add-on Card   None
  Chassis       Cooler Master HAF XB EVO
  PSU           Cooler Master V750 750 W
  OS            Windows 10 Pro x64

Thanks to Cooler Master, GIGABYTE, G.Skill and Intel for the build components.

Our direct-attached storage testing involves artificial benchmarks (ATTO and CrystalDiskMark) as well as real-world data transfer scenarios (photographs, videos and documents). In addition, we run the PCMark 8 Storage Bench for select multimedia editing workloads. Finally, for simultaneous multi-target testing (i.e., multiple drives in a JBOD, or two or more daisy-chained units), we utilize Iometer to get an idea of the total performance.
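For readers who want to approximate the real-world transfer portion of the methodology at home, the idea is simply to time the copy of a representative folder to the DAS and convert that to an average MB/s figure. Below is a minimal sketch of that approach; the folder paths are hypothetical and the use of Python's shutil stands in for our actual transfer scripts:

```python
import shutil
import time
from pathlib import Path

def timed_copy(src: Path, dst: Path) -> float:
    """Copy a directory tree and return the average throughput in MB/s."""
    total_bytes = sum(f.stat().st_size for f in src.rglob("*") if f.is_file())
    start = time.perf_counter()
    shutil.copytree(src, dst)          # dst must not already exist
    elapsed = time.perf_counter() - start
    return (total_bytes / 1e6) / elapsed

if __name__ == "__main__":
    # Hypothetical paths: a local staging folder of photos and a target folder on the DAS under test.
    rate = timed_copy(Path(r"C:\staging\photos"), Path(r"E:\photos"))
    print(f"Write to DAS: {rate:.1f} MB/s")
```

Repeating the copy in the other direction gives the corresponding read figure, and averaging multiple runs smooths out caching effects.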

Comments
  • DanNeely - Thursday, April 14, 2016 - link

    Were there any issues with video quality/reliability in your daisy chaining test?

    I'm guessing not since you didn't come close to saturating the bus in this test, but I'd be really interested in seeing what happens if you ever get to play with something like a 10-bay SSD enclosure and external GPU that could devour most of the bandwidth for whatever they're running.
  • ganeshts - Thursday, April 14, 2016 - link

    The DisplayPort lanes are muxed together with the PCIe lanes, and, if any throttling were to happen, it would be on the PCIe lanes, and not the DisplayPort ones.

    But, yes, we were way short of saturating the link because we were not equipped properly to test that aspect.
  • DanNeely - Thursday, April 14, 2016 - link

    Ok. I didn't know the DP lanes were given explicit preference in the MUXing; but I suppose that in general it's probably the right way to go.
  • repoman27 - Thursday, April 14, 2016 - link

    I just wanted to clarify that Thunderbolt supports multiple signaling modes over a single port via hardware muxing; however, when operating in Thunderbolt signaling mode, it uses protocol converters and crossbar switches to mux at the packet level. So the Thunderbolt mode is more like iSCSI or other technologies that encapsulate and transport data streams over IP networks alongside other packets. The encapsulation that Thunderbolt performs is incredibly lightweight, though, and Intel even refers to it as a "meta protocol".

    Thunderbolt seems to have fairly solid mechanisms in place for guaranteeing timely delivery of packets that are part of isochronous data streams such as DisplayPort. OG Thunderbolt did have some issues with USB audio adapters, as it had no way of knowing that the PCIe packets destined for the USB host controller in the device were particularly time sensitive.

    Bear in mind that the signaling rate of a Thunderbolt PHY is considerably higher than the versions of PCIe or DisplayPort that it's carrying. Also, it can strip out all the bit-stuffing that is normally used to maintain a constant DisplayPort data rate and just send the packets carrying actual data. Or it can essentially bit-stuff with PCIe packets instead.

    And one last niggle, Thunderbolt cables are really nothing like regular DisplayPort cables aside from sharing the miniDP connector. They're active and have four full-duplex signaling lanes, whereas DP cables are generally passive and only support half-duplex lanes for the main link.
  • DanNeely - Friday, April 15, 2016 - link

    You say USB Audio was a problem with the original generation, does that mean it's not a problem in the 2nd/3rd generation? If so, how was it fixed: Throwing more bandwidth at the problem, or by doing usb packet inspection to ID and prioritize the usb audio stream?

    Also, am I right in assuming a TB dock with audio out would be using a built in usb audio device?
  • repoman27 - Saturday, April 16, 2016 - link

    Here's Anand's description of the original problem: http://www.anandtech.com/show/4832/the-apple-thund...

    Honestly, I have no idea if the issue with the Promise Pegasus R6 drives (one of the very first Thunderbolt devices to make it to market) was ever fully resolved. In the meantime, Apple has released updates to their EFI, Thunderbolt host, device and cable firmware (yup, even the cables have firmware), and USB host controller and audio device drivers. If I had to guess, making the USB host controller / audio device drivers Thunderbolt aware and capable of isochronous bandwidth reservation (a la FireWire) might have solved the problem. However, Anand's conclusions about the root cause are different than mine, so I could be way off base.

    And yes, AFAIK Thunderbolt docks all use USB devices in some form or another for audio I/O.
  • danbob999 - Thursday, April 14, 2016 - link

    "the street price of $378 sounds reasonable"

    Sorry but no, a hard drive case is not worth $378. Wake me up when it costs less than $30.
  • NCM - Thursday, April 14, 2016 - link

    On the off chance that you're just ignorant, not a troll, I'll point out that it's not a "hard drive case." This enclosure holds two drives, it provides a selection of hardware RAID options, and it has very high speed connectivity via Thunderbolt 3. It's also one of the first products of its kind. Each one of those things adds cost.

    Yes it's not cheap, but this kind of product will become less expensive over time. For comparison purposes we have a bunch of 2.5" drive RAID enclosures at the office that run about $270 each (empty), see http://www.bhphotovideo.com/c/product/882789-REG/C...
  • danbob999 - Thursday, April 14, 2016 - link

    This is a 2x3.5" case. It's too bulky to be useful for 2.5" SSDs. It's too expensive and no faster than a plain USB3 case when you put two 3.5" HDDs in it.
    I wasn't trolling, I seriously don't see any use case for it. The fact that it needs a fan to operate just makes it worse.
  • Guspaz - Thursday, April 14, 2016 - link

    Can you point out a 2x3.5" USB 3 hard drive enclosure with hardware RAID support for under $30? I get that you think $378 is too much, but $30 seems far more unreasonable a price than $378.
