PCMark 10 Storage Bench - Real-World Access Traces

A number of storage benchmarks can subject a device to artificial access patterns by varying the mix of reads and writes, the access block sizes, and the queue depth / number of outstanding data requests. We saw results from two popular ones - ATTO and CrystalDiskMark - in a previous section. More rigorous benchmarks, however, replay access traces recorded from real-world workloads to determine the suitability of a particular device for a particular use-case. Real-world access traces can simulate the behavior of computing activities that are limited by storage performance, such as booting an operating system or loading a particular game from the disk.
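To make the trace-replay idea concrete, here is a minimal sketch (for illustration only - not PCMark 10's implementation; the CSV trace format, file names, and helper function are hypothetical) that reissues a recorded sequence of (operation, offset, size) accesses against a target file and reports the resulting throughput:

```python
# Minimal trace-replay sketch (POSIX-only I/O calls for brevity). The trace
# format is a hypothetical CSV of "operation,offset_bytes,size_bytes" rows;
# PCMark 10 records and replays traces in its own internal format.
import csv
import os
import time

def replay_trace(trace_path: str, target_path: str) -> float:
    """Replay recorded accesses against a target file and return MB/s."""
    fd = os.open(target_path, os.O_RDWR)
    moved = 0
    start = time.perf_counter()
    try:
        with open(trace_path, newline="") as f:
            for op, offset, size in csv.reader(f):
                offset, size = int(offset), int(size)
                if op == "read":
                    moved += len(os.pread(fd, size, offset))
                else:
                    moved += os.pwrite(fd, b"\x00" * size, offset)
    finally:
        os.close(fd)
    return moved / (time.perf_counter() - start) / 1e6

# Example: print(replay_trace("boot_trace.csv", "/path/to/testfile"))
```

A real trace-replay benchmark additionally preserves the recorded inter-request timing and queue depths, which a naive synchronous loop like this one does not.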

PCMark 10's storage bench (introduced in v2.1.2153) includes four storage benchmarks that use real-world traces from popular applications and common tasks to evaluate the latest modern drives:

  • The Full System Drive Benchmark uses a wide-ranging set of real-world traces from popular applications and common tasks to fully test the performance of the fastest modern drives. It involves a total of 204 GB of write traffic.
  • The Quick System Drive Benchmark is a shorter test with a smaller set of less demanding real-world traces. It subjects the device to 23 GB of writes.
  • The Data Drive Benchmark is designed to test drives that are used for storing files rather than applications. These typically include NAS drives, USB sticks, memory cards, and other external storage devices. The device is subjected to 15 GB of writes.
  • The Drive Performance Consistency Test is a long-running and extremely demanding test with a heavy, continuous load for expert users. In-depth reporting shows how the performance of the drive varies under different conditions. This writes more than 23 TB of data to the drive.

Despite the data drive benchmark appearing most suitable for testing direct-attached storage, we opted to run the full system drive benchmark as part of our evaluation flow. Many of us use portable flash drives as boot drives and storage for Steam games. These types of use-cases are addressed only in the full system drive benchmark.

The Full System Drive Benchmark comprises 23 different traces. For the purpose of presenting results, we classify them under five different categories:

  • Boot: Replay of the storage access trace recorded while booting Windows 10.
  • Creative: Replay of storage access traces recorded during the start up and usage of Adobe applications such as Acrobat, After Effects, Illustrator, Premiere Pro, Lightroom, and Photoshop.
  • Office: Replay of storage access traces recorded during the usage of Microsoft Office applications such as Excel and PowerPoint.
  • Gaming: Replay of storage access traces recorded during the start up of games such as Battlefield V, Call of Duty Black Ops 4, and Overwatch.
  • File Transfers: Replay of storage access traces (Write-Only, Read-Write, and Read-Only) recorded during the transfer of data such as ISOs and photographs.

PCMark 10 also generates an overall score, bandwidth, and average latency number for quick comparison of different drives. The sub-sections in the rest of the page reference the access traces specified in the PCMark 10 Technical Guide.
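As an illustration of how the per-trace results in the following sub-sections roll up into these five categories, the sketch below averages per-trace bandwidth figures within each category. The trace-to-category mapping here is abridged and the numbers are placeholders; the authoritative trace list and definitions are in the PCMark 10 Technical Guide.

```python
# Illustrative roll-up of per-trace bandwidth into the five categories used in
# this review. The mapping is abridged; the bandwidth values below are
# placeholders, not measured results.
from statistics import mean

CATEGORIES = {
    "Boot":           ["boo"],
    "Creative":       ["sacr", "saft", "sill", "spre", "slig", "sps"],
    "Office":         ["exc", "pow"],
    "Gaming":         ["bf", "cod", "ow"],
    "File Transfers": ["cp1", "cp2", "cp3", "cps1", "cps2", "cps3"],
}

def category_bandwidth(per_trace_mbps: dict[str, float]) -> dict[str, float]:
    """Average the available per-trace bandwidth numbers within each category."""
    return {
        cat: mean(per_trace_mbps[t] for t in traces if t in per_trace_mbps)
        for cat, traces in CATEGORIES.items()
        if any(t in per_trace_mbps for t in traces)
    }

# Placeholder numbers purely to show the shape of the computation.
print(category_bandwidth({"boo": 310.0, "sacr": 280.0, "sps": 295.0,
                          "exc": 300.0, "pow": 290.0}))
```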

Booting Windows 10

The read-write bandwidth recorded for each drive in the boo (Windows 10 boot) access trace is presented below.

Windows 10 Boot

USB SuperSpeed 20Gbps doesn't matter for the boot process - in fact, the Extreme Portable SSD v2 using the SN550E behind a USB 3.2 Gen 2 (10 Gbps) bridge scores better than the P50 and the Extreme PRO v2 on the Haswell testbed in this benchmark.

Creative Workloads

The read-write bandwidths recorded for each drive in the sacr, saft, sill, spre, slig, sps, aft, exc, ill, ind, psh, and psl access traces are presented below.

Startup - Adobe Acrobat

These workloads also seem to get little benefit from the move to USB SuperSpeed 20Gbps. In almost all cases, the SanDisk Extreme v2 performs better than the Extreme PRO v2 on the same testbed. The P50 seems to suffer from some handicaps for these types of workloads.

Office Workloads

The read-write bandwidths recorded for each drive in the exc and pow access traces are presented below.

Usage - Microsoft Excel

The trend seen in the earlier PCMark 10 storage workloads repeats here - the Extreme v2 slightly edges out the Extreme PRO v2, while the P50 lags well behind.

Gaming Workloads

The read-write bandwidths recorded for each drive in the bf, cod, and ow access traces are presented below.

Startup - Battlefield V

This section finally sees the P50 live up to its billing as a game drive - in the Call of Duty loading test, it scores almost as well as the Extreme PRO v2, while the Extreme v2 lags well behind. Overall, though, the Extreme PRO v2 seems to be the better fit for gaming workloads.

File Transfer Workloads

The read-write bandwidths recorded for each drive in the cp1, cp2, cp3, cps1, cps2, and cps3 access traces are presented below.

Duplicating ISOs (Read-Write)

In most workloads, the USB SuperSpeed 20Gbps drives come out on top, with the Extreme PRO v2 slightly edging out the P50.

Overall Scores

PCMark 10 reports an overall score based on the observed bandwidth and access times for the full workload set. The score, bandwidth, and average access latency for each of the drives are presented below.
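For reference, the reported bandwidth and average access latency can be thought of as two views of the same replay log - total bytes moved over the time the drive was busy, and the mean per-request service time. The sketch below is a hypothetical extension of the replay helper shown earlier, not PCMark 10's scoring method, which is defined in its Technical Guide:

```python
# Sketch: derive aggregate bandwidth and average access latency from a replay.
# The request-list format and helper are hypothetical; POSIX-only I/O calls
# are used for brevity.
import os
import time

def timed_replay(fd: int, requests: list[tuple[str, int, int]]) -> tuple[float, float]:
    """Return (bandwidth in MB/s, average access latency in ms)."""
    moved = 0
    latencies = []
    for op, offset, size in requests:
        start = time.perf_counter()
        if op == "read":
            moved += len(os.pread(fd, size, offset))
        else:
            moved += os.pwrite(fd, b"\x00" * size, offset)
        latencies.append(time.perf_counter() - start)
    busy_time = sum(latencies)  # time spent servicing requests, excluding idle
    return moved / busy_time / 1e6, 1000 * busy_time / len(latencies)
```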

Full System Drive Benchmark Bandwidth (MBps)

The WD_BLACK P50's scores are a bit behind those of the Extreme PRO v2 and the Extreme v2 when considered on a testbed-by-testbed basis, though the gulf is not significant when the host port is limited to 10 Gbps. It may just be that Western Digital has tuned the firmware of the P50 to cater primarily to gaming workloads.

Comments

  • Eric_WVGG - Tuesday, October 6, 2020 - link

    On a related note, I would love to see you guys do some kind of investigation into why we're five years into this standard and one still cannot buy an actual USB-C hub (i.e. not a port replicator).
  • hubick - Tuesday, October 6, 2020 - link

    A 3.2 hub with gen 2 ports and a 2x2 uplink would be cool!

    I wanted a 10gbps / gen 2 hub and got the StarTech HB31C3A1CS, which at least has a USB-C gen 2 uplink and a single USB-C gen 2 port (plus type A ports). Don't know if you can do any better than that right now.
  • repoman27 - Tuesday, October 6, 2020 - link

    Although it's still not exactly what you're looking for, I've tried (unsuccessfully) to get people to understand what a unicorn the IOGEAR GUH3C22P is. Link: https://www.iogear.com/product/GUH3C22P

    It's a 5-port USB3 10Gbps hub with a tethered USB Type-C cable on the UFP (which supports up to 85W USB PD source), two (2!) downstream facing USB Type-C ports (one of which supports up to 100W USB PD sink), and two USB Type-A ports (one of which supports up to 7.5W USB BC).
  • serendip - Thursday, October 8, 2020 - link

    No alt mode support like for DisplayPort. I haven't found a portable type-C hub that supports DisplayPort alt mode over downstream type-C ports although some desktop docks support it.
  • stephenbrooks - Tuesday, October 6, 2020 - link

    What if I want a USB 20Gbps port on the front of my computer?

    Can I get a USB-C front panel and somehow connect the cable internally to the PCI-E USB card?
  • abufrejoval - Tuesday, October 6, 2020 - link

    I am building hyperconvergent clusters for fun and for work, the home-lab one out of silent/passive 32GB RAM, 1TB SATA-SSD J5005 Atoms, the next iteration most likely from 15Watt-TDP-NUCs, an i7-10700U with 64GB RAM, 1TB NVMe SSD in testing.

    Clusters need low-latency, high-bandwidth interconnects. Infiniband is a classic in data centers, but NUCs offer 1Gbit Ethernet pretty much exclusively, with Intel struggling to do 2.5Gbit there, while Thunderbolt and USB3/4 could do much better. Only they aren’t peer-to-peer, and a TB 10Gbase-T adapter sets you back more than the NUC itself while adding lots of latency and TCP/IP overhead, when what I want is RDMA.

    So could we please pause for a moment and think on how we can build fabrics out of USB-X? Thunderbolt/USB4 is already about PCIe lanes, but most likely with multi-root excluded to maintain market segmentation and reduce validation effort.

    I hate how the industry keeps going to 90% of something really useful and then concentrating on 200% speed instead of creating real value.
  • repoman27 - Wednesday, October 7, 2020 - link

    Uh, Thunderbolt and USB4 are explicitly designed to support host-to-host communications already. OS / software support can be a limiting factor, but the hardware is built for it.

    Existing off-the-shelf solutions:
    https://www.dataonstorage.com/products-solutions/k...
    https://www.gosymply.com/symplyworkspace
    https://www.areca.com.tw/products/thunderbolt-8050...
    http://www.accusys.com.tw/T-Share/

    IP over Thunderbolt is also available:
    https://thunderbolttechnology.net/sites/default/fi...™%20Networking%20Bridging%20and%20Routing%20Instructional%20White%20Paper.pdf
    https://support.apple.com/guide/mac-help/ip-thunde...
  • repoman27 - Wednesday, October 7, 2020 - link

    Stupid Intel URL with ™ symbol. Let's try that again:
    https://thunderbolttechnology.net/sites/default/fi...
  • abufrejoval - Wednesday, October 7, 2020 - link

    Let me tell you: You just made my day! Or more likely one or two week-ends!

    Not being a Mac guy, I had completely ignored Thunderbolt for a long time and never learned that it supports networking natively. From the Intel docs it looks a bit similar to Mellanox VPI and host-chaining: I can use 100Gbit links there without any switch to link three machines in a kind of “token ring” manner for Ethernet (these are hybrid adapters that would also support Infiniband, but driver support for host-chaining covers only Ethernet). Unfortunately the effective bandwidth is only around 35GByte/s for direct hops and slows to 16GByte/s once it has to pass through another host: not as much of an upgrade over 10Gbase-T as you’d hope for. I never really got into testing latencies, which is where the Infiniband personality of those adapters should shine.

    And that is where I am hoping TB delivers significant improvements over Ethernet, apart from the native 40Gbit/s speed: just right for Gluster storage!

    I also tried to get Ethernet over Fibre Channel working years ago, when they were throwing out 4Gbit adapters in the data center, but even though it was specified as a standard, it never got driver support, and at higher speeds the trend went the other direction.

    So I’ll definitely try to make the direct connection over TB3 work: CentOS8 should have kernel support for TB networking and the best news is that it doesn’t have to wait for TB4, but should work with TB3, too.

    I’ve just seen what seemed like an incredibly cheap 4-way TB switch recommended by Anton Shilov on the TomsHardware side of this enterprise, which unfortunately is only on pre-order for now (https://eshop.macsales.com/shop/owc-thunderbolt-hu...), but it is supposed to support TB networking. Since the NUCs are single-port TB3 only, that should still do the trick and be upgradable to TB4 for just around $150… The 5Gbit USB3 Aquantia NIC wasn’t much cheaper, and even the 2.5Gbit USB3 NICs are still around $40.

    Exciting, exciting all that: Thank you very much for those links!
  • abufrejoval - Wednesday, October 7, 2020 - link

    ...except... I don't think that "switch" will be supporting multiple masters, same as USB.

    If it did, Intel would have shot themselves in the foot: 40Gbit networking on NUCs and laptops with little more than passive cables - that's almost as bad as adding a system management mode on 80486L and finding that it can be abused to implement a hypervisor (Mendel and Diane started VMware with that trick).

    Yet that's exactly what consumers really thirst for.
