Sequential Read Performance

The structure of this test is the same as the random read test, except that the reads are 128kB and arranged sequentially. The test covers total queue depths from 1 to 512, using one to eight worker threads each with a queue depth of up to 64. Each worker thread reads from a different section of the drive, so the reads collectively cover the span of the drive or array. Each queue depth is tested for four minutes, and the performance average excludes the first minute. The queue depths are tested back to back with no idle time. Prior to running this test, the drives were preconditioned by writing to the entire drive sequentially, twice over.
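
A minimal sketch of how such a sweep could be scripted is shown below. It assumes fio as the I/O generator, /dev/nvme0n1 as a hypothetical target (a single drive or an md/VROC volume), and an inferred thread/queue-depth ladder; none of these specifics come from the article. The sequential write test further down reuses the same structure with the direction flipped.

    import subprocess

    DEVICE = "/dev/nvme0n1"  # hypothetical target: single drive or md/VROC volume

    def run_step(threads: int, iodepth: int, rw: str = "read") -> None:
        # One step of the sweep: `threads` workers, each issuing 128kB sequential
        # I/O at `iodepth`. A 60-second ramp plus 180 seconds of measurement gives
        # four minutes per step with the first minute excluded from the average.
        subprocess.run([
            "fio", "--name=seq", f"--filename={DEVICE}",
            f"--rw={rw}", "--bs=128k",
            "--ioengine=libaio", "--direct=1",
            f"--numjobs={threads}", f"--iodepth={iodepth}",
            "--size=12%", "--offset_increment=12%",  # rough per-worker slice of the device (assumed)
            "--time_based", "--runtime=180", "--ramp_time=60",
            "--group_reporting",
        ], check=True)

    # Assumed ladder from total QD1 to QD512: add threads first, then deepen each
    # thread's queue, so that e.g. QD16 is eight threads at QD2 each. Steps run
    # back to back with no idle time.
    steps = [(1, 1), (2, 1), (4, 1), (8, 1), (8, 2),
             (8, 4), (8, 8), (8, 16), (8, 32), (8, 64)]
    for threads, iodepth in steps:
        run_step(threads, iodepth, rw="read")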

Sustained 128kB Sequential Read

The sequential read performance results are pretty much as expected. The 2TB and 8TB drives have the same peak throughput. The two-drive RAID-0 is almost as fast as the four-drive arrays were when limited by the PCIe x8 bottleneck, and with that bottleneck removed, the throughput of the four-drive RAID-0 and RAID-10 increases by 80%.

All but the RAID-5 configuration show a substantial drop in throughput from QD1 to QD2 as competition between threads is introduced, but performance quickly recovers. The individual drives reach full speed at QD16 (eight threads each at QD2). Unsurprisingly, the two-drive configuration saturates at QD32 and the four-drive arrays saturate at QD64.

Sequential Write Performance

The structure of this test is the same as the sequential read test, except with 128kB sequential writes. The test covers total queue depths from 1 to 512, using one to eight worker threads each with a queue depth of up to 64. Each worker thread writes to a different section of the drive, so the writes collectively cover the span of the drive or array. Each queue depth is tested for four minutes, and the performance average excludes the first minute. The queue depths are tested back to back with no idle time. This test was run immediately after the sequential read test, so the drives had already been preconditioned with sequential writes.
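
Under the same assumptions as the read sketch above, the write sweep only flips the transfer direction:

    # Same assumptions as the sequential read sketch; only the direction changes.
    for threads, iodepth in steps:
        run_step(threads, iodepth, rw="write")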

Sustained 128kB Sequential Write

The 8TB P4510 delivers far higher sequential write throughput than the 2TB model. The four-drive RAID-10 configuration requires more than PCIe x8 bandwidth to beat the 8TB drive. The four-drive RAID-0 is about 3.6 times faster than a single 2TB drive, but only 2.4 times faster than the equivalent-capacity 8TB drive.

The sequential write throughput of most configurations saturates at queue depths of just 2-4. The 8TB drive takes a bit longer, reaching full speed around QD8. A four-drive array scales up more slowly when it is subject to a PCIe bottleneck, even before it reaches that upper limit.

Comments

  • ABB2u - Thursday, February 15, 2018 - link

    Is Intel VROC really software RAID? No question RAID is all about software. But, since this is running underneath an OS at the chip level, why not call it Hardware RAID just like the RAID software running on an Avago RAID controller? In my experience, I have referred to software RAID as that implemented in the OS through LVM or Disk Management, a filesystem like ZFS, or erasure coding at a parallel block level. --- It is all about the difference in latency.
  • saratoga4 - Thursday, February 15, 2018 - link

    >Is Intel VROC really software RAID?

    Yes.

    > In my experience, I have referred to software RAID as that implemented in the OS

    That is what VROC is. Without the driver, you would just have independent disks.
  • Samus - Thursday, February 15, 2018 - link

    So this is basically just Storage Spaces?
  • tuxRoller - Friday, February 16, 2018 - link

    Storage Spaces is more similar to LVM & mdadm (pooling, placement & parity policies, hot spares, and a general storage management interface), while VROC lets the OS deal with NVMe device bring-up & then offers pooling + parity without an HBA.
  • HStewart - Thursday, February 15, 2018 - link

    I would think any RAID system has software to drive it - it may be on, say, an ARM microcontroller - but it still has some kind of software to make it work.

    But I doubt you could take Intel's driver and make it work on another SSD. It probably has specific hardware enhancements to increase its performance.
  • Nime - Thursday, March 21, 2019 - link

    If the RAID controller uses the same CPU as the OS, it might be called soft RAID. If the controller has its own processor to calculate the disk data to read & write, it's a hardware RAID system.
  • saratoga4 - Thursday, February 15, 2018 - link

    I would be interested to see the performance of normal software RAID vs. VROC, since for most applications I would prefer not to boot off of a high-performance disk array. What, if any, benefit does it offer over more conventional software RAID?
  • JamesAnthony - Thursday, February 15, 2018 - link

    I think the RAID-5 tests, when you are done with them, are going to be an important indication of what actual performance the platform is capable of.
  • boeush - Thursday, February 15, 2018 - link

    Maybe a stupid question, but - out of sheer curiosity - is there a limit, if any, on the number of VROC drives per array? For instance, could you use VROC to build a 10-drive RAID-5 array? (Is 4 drives the maximum - or if not, why wouldn't Intel supply more than 4 to you, for an ultimate showcase?)

    On a separate note - the notion of paying Intel extra $$$ just to enable functions you've already purchased (by virtue of them being embedded on the motherboard and the CPU) - I just can't get around it appearing as nothing but a giant ripoff. Doesn't seem like this would do much to build or maintain brand loyalty... And the notion of potentially paying less to enable VROC when restricted to Intel-only drives - reeks of exerting market dominance to suppress competition (i.e. sounds like an anti-trust lawsuit in the making...)
  • stanleyipkiss - Thursday, February 15, 2018 - link

    The maximum number of drives, as stated in the article, depends solely on the number of PCIe lanes available. These being x4 NVMe drives, the lanes dry up quickly.
