QD1 Random Read Performance

Drive throughput with a queue depth of one is usually not advertised, but almost every latency or consistency metric reported on a spec sheet is measured at QD1, and usually for 4kB transfers. When the drive only has one command to work on at a time, there's nothing to get in the way of it offering its best-case access latency. Performance at such light loads is absolutely not what most of these drives are made for, but they have to make it through the easy tests before we move on to the more realistic challenges.
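The article doesn't publish its exact benchmark configuration, but a QD1 4kB random read test of this kind is commonly expressed as an fio job. A minimal sketch (the device path and runtime are assumptions for illustration):

```ini
; Hypothetical fio job approximating a QD1 4kB random read latency test.
; Not the review's actual configuration; /dev/nvme0n1 is a placeholder.
[qd1-randread]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=1
numjobs=1
runtime=60
time_based
filename=/dev/nvme0n1
```

With `iodepth=1`, fio never has more than one command outstanding, so the reported latencies reflect the drive's uncontended access time.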

*The colors on the graphs have no specific meaning; they are only there to make it easier to pick out drives from the same family.

4kB Random Read QD1

The Intel DC P4510 and Samsung 983 DCT offer identical QD1 random read performance, showing that Intel/Micron 3D NAND has caught up to Samsung after a first generation that was clearly slower (shown here by the Memblaze PBlaze5). The Samsung SATA drives are about 40% slower than their NVMe drives, and the Optane SSD is almost ten times faster than anything else.


4kB Random Read QD1 (Power Efficiency)
[Chart axes: Power Efficiency in kIOPS/W; Average Power in W]

QD1 isn't pushing the drives very hard, so they're not very far above idle power draw. That means the bigger, beefier drives draw more power but have little to show for it. The SATA drives all post higher efficiency scores than the flash-based NVMe drives, with one exception: the Optane SSD provides more than 2.5x the performance per watt of anything else at QD1.
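The efficiency metric in these charts is simple arithmetic: IOPS divided by average power draw. A minimal sketch (the numbers below are illustrative placeholders, not measured values from this review):

```python
def kiops_per_watt(iops: float, avg_power_w: float) -> float:
    """Performance per watt as plotted above: thousands of IOPS per watt."""
    return iops / 1000 / avg_power_w

# Hypothetical example: 90,000 IOPS at 2.5 W works out to 36 kIOPS/W.
print(kiops_per_watt(90_000, 2.5))  # → 36.0
```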

4kB Random Read QD1 QoS

All of these drives have pretty good consistency for QD1 random reads. The 8TB P4510 has a 99.99th percentile latency that's just over 2.5 times its average, and all the other flash-based SSDs are better than that. (The PBlaze5 is a bit slower than the P4510, but more consistent.) The Optane SSD actually has the biggest relative disparity between average latency and 99.99th percentile, but that hardly matters when its worst case is as good as or better than the average-case performance of flash memory.
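For readers unfamiliar with how "four nines" QoS numbers are produced: the latency of every I/O is recorded, and the 99.99th percentile is the value below which 99.99% of them fall. A rough nearest-rank sketch with synthetic latencies (the distribution here is made up, loosely mimicking a flash SSD with a rare slow tail, not data from this review):

```python
import math
import random

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples."""
    s = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(s)))
    return s[rank - 1]

random.seed(0)
# Synthetic data: mostly ~90 µs reads plus a rare slow tail.
lat_us = [random.gauss(90, 5) for _ in range(100_000)]
lat_us += [random.uniform(200, 400) for _ in range(20)]

avg = sum(lat_us) / len(lat_us)
print(f"average: {avg:.1f} µs")
print(f"99.99th percentile: {percentile(lat_us, 99.99):.1f} µs")
```

Even 20 slow outliers in 100,000 I/Os are enough to push the 99.99th percentile well above the average, which is why tail latency gets its own charts.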

The random read performance of most of these drives is optimized for 4kB block sizes: smaller block sizes perform roughly the same in terms of IOPS, and thus get much lower throughput. Throughput stays relatively low until block sizes exceed 64kB, after which the NVMe drives all start to deliver much higher throughput. This is most pronounced for the Samsung 983 DCT.
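The IOPS-versus-throughput tradeoff described above is just multiplication: throughput equals IOPS times block size. A quick sketch with hypothetical numbers (not figures from these charts):

```python
def throughput_mb_s(iops: float, block_size_bytes: int) -> float:
    """Throughput implied by an IOPS result: MB/s = IOPS * block size."""
    return iops * block_size_bytes / 1e6

# If a drive sustains ~90k IOPS regardless of (small) block size,
# throughput scales directly with transfer size:
for bs in (512, 2048, 4096):
    print(f"{bs:>5} B blocks: {throughput_mb_s(90_000, bs):7.2f} MB/s")
```

This is why flat IOPS below 4kB means sub-4kB reads deliver much lower throughput.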

QD1 Random Write Performance

4kB Random Write QD1

The clearest trend for 4kB random write performance at QD1 is that the NVMe drives are at least twice as fast as the SATA drives, but there's significant variation: Intel's NVMe drives have much faster QD1 random write performance than Samsung's, and the Memblaze drives are in between.

4kB Random Write QD1 (Power Efficiency)
[Chart axes: Power Efficiency in kIOPS/W; Average Power in W]

The SATA drives draw much less power than the NVMe drives during this random write test, so the efficiency scores end up in the same ballpark with no clear winner. The PBlaze5 is the clear loser, because its huge power-hungry controller is of no use here at low loads.

4kB Random Write QD1 QoS

The Intel DC P4510 has some QoS issues, with 99th and 99.99th percentile random write latencies that are far higher than any of the other NVMe drives, despite the P4510 having one of the best average latency scores. Samsung's latest generation of SATA drives doesn't offer a significant improvement to average-case latency, but the tail latency has clearly improved.

As with random reads, we find that these drives deliver their peak random write IOPS with 4kB transfers. The Memblaze PBlaze5 drives do extremely poorly with writes smaller than 4kB, but the rest of the drives handle tiny writes with roughly the same IOPS as a 4kB write. 8kB and larger random writes always yield fewer IOPS but usually significantly higher overall throughput.

QD1 Sequential Read Performance

128kB Sequential Read QD1

At QD1, the SATA drives aren't quite saturating the host interface, but mainly because of the link idle time between the drive finishing one transfer and receiving the command to start the next. The PBlaze5 SSDs are only a bit faster than the SATA drives at QD1 despite the C900 being the drive with the most host bandwidth, reminding us how the first-generation IMFT 3D TLC could be quite slow at times. The Intel drives are a bit slower than the Samsung drives, coming in at or below 2GB/s while the 983 DCT U.2 hits 2.5GB/s and the M.2 is around 2.2GB/s.
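The link-idle-time explanation can be put into a toy model: each command moves one block, then the link sits idle for a host round trip before the next command arrives. The gap value below is an assumed figure for illustration, not a measurement from this review:

```python
def qd1_seq_throughput_mb_s(block_bytes: int, link_mb_s: float,
                            gap_us: float) -> float:
    """Effective QD1 throughput when each transfer is followed by an
    idle gap before the next command is issued."""
    transfer_s = block_bytes / (link_mb_s * 1e6)
    return block_bytes / (transfer_s + gap_us * 1e-6) / 1e6

# 128kB reads on a ~550 MB/s SATA link with an assumed 20 µs command gap:
print(f"{qd1_seq_throughput_mb_s(128 * 1024, 550, 20):.0f} MB/s")
```

Even a modest per-command gap shaves a noticeable fraction off the link rate at QD1; higher queue depths hide that gap by keeping the next command ready.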

128kB Sequential Read QD1 (Power Efficiency)
[Chart axes: Power Efficiency in MB/s/W; Average Power in W]

The latest Samsung SATA drives are comparable in efficiency to Intel's NVMe, while Samsung's own NVMe SSD is substantially more efficient. The Memblaze PBlaze5 is by far the least efficient, since it offers disappointing QD1 sequential read performance while drawing more power than all the other flash-based SSDs.

The Memblaze PBlaze5 doesn't seem to be any good at prefetching or caching when performing sequential reads: its throughput is very low for small to medium block sizes, and even at 128kB it is much slower than with 1MB transfers. The rest of the drives generally provide full sequential read throughput for transfer sizes starting around 64kB or 128kB.

QD1 Sequential Write Performance

128kB Sequential Write QD1

At QD1, the Intel P4510 and Samsung 983 DCT are only slightly faster at sequential writes than the SATA drives. The Optane SSD and the Memblaze PBlaze5 C900 both perform very well, while the PBlaze5 D900 can't quite hit 1GB/s at QD1.

128kB Sequential Write QD1 (Power Efficiency)
[Chart axes: Power Efficiency in MB/s/W; Average Power in W]

Once again the SATA drives dominate the power efficiency rankings; Samsung's current-generation SATA drives draw less than half the power of the most efficient flash-based NVMe drive. The older Samsung PM863 shows that Samsung's SATA drives have improved significantly even though the performance is barely changed from earlier generations. Among NVMe drives, power consumption scales roughly as expected, with the Memblaze PBlaze5 C900 drawing over 22W to deliver 1.6GB/s sustained writes.

As with random writes, for sequential writes the Memblaze PBlaze5 doesn't like being handed writes of less than 4kB, and neither does the Intel Optane P4800X. The rest of the drives generally hit their steady-state sequential write speed starting with transfer sizes in the 8kB to 32kB range.

Comments

  • FunBunny2 - Thursday, January 3, 2019 - link

    "The rack is currently installed in an unheated attic and it's the middle of winter, so this setup provided a reasonable approximation of a well-cooled datacenter."

    well... I don't know where your attic is, but mine is in New England, and the temperature hasn't been above freezing for an entire day for some time. what's the standard ambient for a datacenter?
  • Ryan Smith - Thursday, January 3, 2019 - link

    It is thankfully much warmer in North Carolina. =)
  • Billy Tallis - Thursday, January 3, 2019 - link

    I'm in North Carolina, so the attic never gets anywhere close to freezing, but it was well below normal room temperature during most of this testing. Datacenters aren't necessarily chilled that low unless they're in cold climates or are adjacent to a river full of cold water, but servers in a datacenter also tend to have their fans set to run much louder than I want in my home office.

    The Intel server used for this testing is rated for continuous operation at 35ºC ambient. It's rated for short term operation at higher temperatures (40ºC for 900 hours per year, 45ºC for 90 hours per year) with some performance impact but no harm to reliability. In practice, by the time the air intake temperature gets up to 35ºC, it's painfully loud.
  • Jezzah88 - Friday, January 4, 2019 - link

    16-19 depending on size
  • drajitshnew - Thursday, January 3, 2019 - link

    Is there enough information available for you to at least make a pipeline post that clarifies the differences between Z-NAND (Samsung) and traditional MLC/SLC flash?
  • Billy Tallis - Thursday, January 3, 2019 - link

    I should have a review up of the Samsung 983 ZET Z-SSD next month. I'll include all the information we have about how Z-NAND differs from conventional planar and 3D SLC. Samsung did finally share some real numbers at ISSCC2018, and it looks like the biggest difference enabling lower latency is much smaller page sizes.
  • MrCommunistGen - Thursday, January 3, 2019 - link

    Very much looking forward to the review!
  • Greg100 - Thursday, January 3, 2019 - link

    It's a pity that we don't have consumer drives that are fast and at the same time have large enough capacity - 8TB. I would like to have a consumer U.2 drive that has 8TB capacity.

    What we have now… only 4TB Samsung and… SATA :(

    Will Intel DC P4510 8TB be compatible with Z390 motherboard, Intel Core i9-9900K and Windows 10 Pro? Connection via U.2 to M.2 cable (Intel J15713-001). Of course the M.2 port on the motherboard will be compatible with NVMe and PCI-E 3.0 x4.

    I know that compatibility should be checked on the motherboard manufacturer's website, but nobody has checked Intel DC P4510 drives and nobody will, because everyone assumes that the consumer does not need 8TB SSDs.

    Anandtech should also test these drives on consumer motherboards. Am I the only one who would like to use the Intel DC P4510 8TB with an Intel Z390, Intel Core i9-9900K and Windows 10 Pro? Is it possible? Will there be any compatibility problems?
  • Billy Tallis - Thursday, January 3, 2019 - link

    I don't currently have the necessary adapter cables to connect a U.2 drive to our consumer testbed, but I will run the M.2 983 DCT through the consumer test suite at some point. I have plenty of consumer drives to be testing this month, though.

    Generally, I don't expect enterprise TLC drives to be that great for consumer workloads, due to the lack of SLC caching. And they'll definitely lose out on power efficiency when testing them at low queue depths. There shouldn't be any compatibility issues using enterprise drives on consumer systems, though. There's no need for separate NVMe drivers or anything like that. Some enterprise NVMe drives do add a lot to boot times.
  • Greg100 - Thursday, January 3, 2019 - link

    Thank you :-) So I will try that configuration.

    Maybe the Intel DC P4510 8TB will not be the boot champion or the most power-efficient drive at low queue depths, but having 8TB of data on a single drive with fast sequential access has huge benefits for me.

    Do you think it is worth waiting for 20TB Intel QLC or 8TB+ client drives? Any rumors?
