Peak Throughput And Steady State

For client/consumer SSDs we primarily focus on low queue depth performance for its relevance to interactive workloads. Server workloads are often intense enough to keep a pile of drives busy, so the maximum attainable throughput of enterprise SSDs is actually important. But it usually isn't a good idea to focus solely on throughput while ignoring latency, because somewhere down the line there's always an end user waiting for the server to respond.

In order to characterize the maximum throughput an SSD can reach, we need to test at a range of queue depths. Different drives will reach their full speed at different queue depths, and increasing the queue depth beyond that saturation point may be slightly detrimental to performance, and will drastically and unnecessarily increase latency. SATA drives can only have 32 pending commands in their queue, and any attempt to benchmark at higher queue depths will just result in commands sitting in the operating system's queues before being issued to the drive. On the other hand, some high-end NVMe SSDs need queue depths well beyond 32 to reach full speed.

Because of the above, we are not going to compare drives at a single fixed queue depth. Instead, each drive was tested at a range of queue depths up to the excessively high QD 512. For each drive, the queue depth with the highest performance was identified. Rather than report that value, we're reporting the throughput, latency, and power efficiency for the lowest queue depth that provides at least 95% of the highest obtainable performance. This often yields much more reasonable latency numbers, and is representative of how a sensible operating system's IO scheduler should behave. (Our tests have to be run with any such scheduler disabled, or we would not get the queue depths we ask for.)
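
For the curious, that selection step boils down to something like the short Python sketch below. The queue depth sweep numbers and variable names are made up purely for illustration; they are not results from any of the drives in this review.

    # Hypothetical sweep results: queue depth -> IOPS (illustrative numbers only).
    results = {1: 12000, 2: 23000, 4: 45000, 8: 88000,
               16: 160000, 32: 290000, 64: 430000,
               128: 540000, 256: 555000, 512: 550000}

    best = max(results.values())
    threshold = 0.95 * best

    # Report the lowest queue depth that still clears 95% of the best throughput
    # observed anywhere in the sweep.
    reported_qd = min(qd for qd, iops in results.items() if iops >= threshold)

    print(f"Report QD{reported_qd}: {results[reported_qd]} IOPS (peak was {best} IOPS)")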

One extra complication is the choice of how to generate a specified queue depth with software. A single thread can issue multiple I/O requests using asynchronous APIs, but this runs into at least one of two problems: if each system call issues one read or write command, then context switch overhead becomes the bottleneck long before a high-end NVMe SSD's abilities are fully taxed. Alternatively, if many operations are batched together for each system call, then the real queue depth will vary significantly and it is harder to get an accurate picture of drive latency.

Using multiple threads to perform IO gets around the limits of single-core software overhead, and brings an extra advantage for NVMe SSDs: the use of multiple queues per drive. The NVMe drives in this review all support 32 separate IO queues, so we can have 32 threads on separate cores independently issuing IO without any need for synchronization or locking between threads. For even higher queue depths, we could use a combination of techniques: one thread per drive queue, issuing multiple IOs with asynchronous APIs. But this is getting into the realm of micro-optimization that most applications will never be properly tuned for, so instead the highest queue depths in these tests are still generated by having N threads issuing synchronous requests one at a time, and it's up to the OS to handle the rest.
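
As a rough illustration of that multi-threaded approach, a bare-bones Python sketch follows. It assumes Linux, opens the drive with O_DIRECT so the page cache stays out of the way, and uses a hypothetical device path; it is a simplified stand-in for our actual test scripts, and Python's per-request overhead means it would not come close to the IOPS figures reported on the following pages.

    import mmap, os, random, threading, time

    DEVICE = "/dev/nvme0n1"   # hypothetical block device; reading it directly requires root
    BLOCK = 4096              # 4kB random reads
    THREADS = 32              # one thread per NVMe queue -> effective queue depth 32
    SECONDS = 10

    def worker(span, counts, idx, stop_at):
        fd = os.open(DEVICE, os.O_RDONLY | os.O_DIRECT)   # bypass the page cache
        buf = mmap.mmap(-1, BLOCK)    # anonymous mmap is page-aligned, as O_DIRECT requires
        blocks = span // BLOCK
        done = 0
        while time.monotonic() < stop_at:
            # Synchronous read at a random aligned offset; each thread keeps exactly
            # one command outstanding, so N threads produce a queue depth of N.
            os.preadv(fd, [buf], random.randrange(blocks) * BLOCK)
            done += 1
        counts[idx] = done
        os.close(fd)

    def main():
        fd = os.open(DEVICE, os.O_RDONLY)
        span = os.lseek(fd, 0, os.SEEK_END)   # usable capacity in bytes
        os.close(fd)

        counts = [0] * THREADS
        stop_at = time.monotonic() + SECONDS
        threads = [threading.Thread(target=worker, args=(span, counts, i, stop_at))
                   for i in range(THREADS)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(f"{sum(counts) / SECONDS:,.0f} IOPS at effective QD {THREADS}")

    if __name__ == "__main__":
        main()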

Peak Random Read Performance

4kB Random Read

The SATA drives all have no trouble more or less saturating their host interface; they have plenty of flash that could service more read requests if they could actually be delivered to the drive quickly enough. Among NVMe drives, we see some dependence on capacity, with the 960GB Samsung 983 DCT falling well short of the 1.92TB model. The rest of the NVMe drives make it past half a million IOPS before software overhead on the host system becomes a bottleneck, so we don't even get close to seeing the PBlaze5 hit its rated 1M IOPS.

4kB Random Read Power Efficiency (kIOPS/W) and Average Power (W)

The Samsung 983 DCT offers the best power efficiency on this random read test, because the drives with bigger, more power-hungry controllers weren't able to show off their full abilities without hitting bottlenecks elsewhere in the system. The SATA drives offer respectable power efficiency as well, since they are only drawing about 2W to saturate the SATA link.

4kB Random Read QoS

The 2TB P4510 and both PBlaze5 drives have consistency issues at the 99.99th percentile level, but are fine at the more relaxed 99th percentile threshold. The Optane SSD's latency scores are an order of magnitude better than any of the other NVMe SSDs, and it was the Optane SSD that delivered the highest overall throughput.

Peak Sequential Read Performance

Since this test consists of many threads each performing IO sequentially but without coordination between threads, there's more work for the SSD controller and less opportunity for pre-fetching than there would be with a single thread reading sequentially across the whole drive. The workload as tested bears a closer resemblance to a file server streaming to several simultaneous users than to the creation of a full-disk backup image.
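
One plausible way to structure such a workload is sketched below in Python: each thread reads 128kB chunks strictly sequentially within its own slice of the drive, so the drive services several independent sequential streams at once. The device path, stream count, and per-stream read length are assumptions chosen for illustration, not our actual test configuration.

    import mmap, os, threading

    DEVICE = "/dev/nvme0n1"   # hypothetical device path
    CHUNK = 128 * 1024        # 128kB sequential reads
    STREAMS = 8               # independent sequential readers
    CHUNKS_EACH = 4096        # 512MB of sequential reading per stream in this sketch

    def stream(start):
        fd = os.open(DEVICE, os.O_RDONLY | os.O_DIRECT)
        buf = mmap.mmap(-1, CHUNK)    # page-aligned buffer, required by O_DIRECT
        for i in range(CHUNKS_EACH):
            # Strictly sequential within this thread's own region of the drive.
            os.preadv(fd, [buf], start + i * CHUNK)
        os.close(fd)

    fd = os.open(DEVICE, os.O_RDONLY)
    span = os.lseek(fd, 0, os.SEEK_END)
    os.close(fd)

    region = (span // STREAMS) // CHUNK * CHUNK   # keep each stream's region chunk-aligned
    threads = [threading.Thread(target=stream, args=(i * region,)) for i in range(STREAMS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()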

128kB Sequential Read

The Intel drives don't quite match the performance of the Samsung 983 DCT or the slower PBlaze5. The Optane SSD ends up being the slowest NVMe drive on this test, but it's actually slightly faster than its spec sheet indicates. The Optane SSD's 3D XPoint memory has very low latency, but that doesn't change the fact that the drive's controller only has seven channels to work with. The PBlaze5s are the two fastest drives on this test, but they're both performing significantly below expectations.

128kB Sequential Read Power Efficiency (MB/s per W) and Average Power (W)

The Samsung 983 DCT clearly has the lead for power efficiency, followed by the slightly slower and more power-hungry Intel P4510. The current-generation SATA drives from Samsung mostly stay below 2W and end up with decent efficiency scores despite the severe performance bottleneck they have to contend with.

Steady-State Random Write Performance

The hardest task for most enterprise SSDs is to cope with an unending stream of writes. Once all the spare area granted by the high overprovisioning ratios has been used up, the drive has to perform garbage collection while simultaneously continuing to service new write requests, and all while maintaining consistent performance. The next two tests show how the drives hold up after hours of non-stop writes to an already full drive.

4kB Random Write

The Samsung drives don't even come close to saturating their host interfaces, but they are performing according to spec for steady-state random writes, with higher-capacity models offering clearly better performance. The Intel and Memblaze drives have a huge advantage, with the slower P4510 maintaining twice the throughput that a 983 DCT can handle.

4kB Random Write Power Efficiency (kIOPS/W) and Average Power (W)

The Samsung 983 DCTs used about 1.5W more power to deliver only slightly higher speeds than the Samsung SATA drives, so the NVMe drives wind up with some of the worst power efficiency ratings. The Optane SSD's wide performance lead more than makes up for its rather high power consumption. In second place for efficiency is the lowly Samsung 860 DCT; despite our best efforts, it continued to deliver higher-than-spec performance on this test while drawing less power than the 883 DCT.

4kB Random Write QoS

The random write throughput provided by the Samsung 983 DCT at steady-state is nothing special, but it delivers that performance with low latency and extremely good consistency that rivals the Optane SSD. The Intel P4510 and Memblaze PBlaze5 SSDs provide much higher throughput, but with tail latencies that extend into the millisecond range. Samsung's 883 DCT SATA drive also has decent latency behavior that is far better than the 860 DCT.

Steady-State Sequential Write Performance

128kB Sequential Write

The steady-state sequential write test mostly levels the playing field. Even the NVMe drives rated at or below 1 DWPD offer largely SATA-like write throughput, and only the generously overprovisioned PBlaze5 can keep pace with the Optane SSD.

128kB Sequential Write Power Efficiency (MB/s per W) and Average Power (W)

The PBlaze5 requires over 20W to keep up with what the Optane SSD can deliver at 14W, so despite its high performance the PBlaze5's efficiency is no better than the other NVMe drives. It's the SATA drives that come out well ahead: even though this workload pushes their power consumption relatively high, Samsung's latest generation of SATA drives is still able to keep it under 3W, and that's enough for a clear efficiency win.

Comments

  • FunBunny2 - Thursday, January 3, 2019 - link

    "The rack is currently installed in an unheated attic and it's the middle of winter, so this setup provided a reasonable approximation of a well-cooled datacenter."

    well... I don't know where your attic is, but mine is in New England, and the temperature hasn't been above freezing for an entire day for some time. what's the standard ambient for a datacenter?
  • Ryan Smith - Thursday, January 3, 2019 - link

    It is thankfully much warmer in North Carolina. =)
  • Billy Tallis - Thursday, January 3, 2019 - link

    I'm in North Carolina, so the attic never gets anywhere close to freezing, but it was well below normal room temperature during most of this testing. Datacenters aren't necessarily chilled that low unless they're in cold climates or are adjacent to a river full of cold water, but servers in a datacenter also tend to have their fans set to run much louder than I want in my home office.

    The Intel server used for this testing is rated for continuous operation at 35ºC ambient. It's rated for short term operation at higher temperatures (40ºC for 900 hours per year, 45ºC for 90 hours per year) with some performance impact but no harm to reliability. In practice, by the time the air intake temperature gets up to 35ºC, it's painfully loud.
  • Jezzah88 - Friday, January 4, 2019 - link

    16-19°C depending on size
  • drajitshnew - Thursday, January 3, 2019 - link

    Is there enough information available for you to at least make a pipeline post clarifying the differences between Z-NAND (Samsung) and traditional MLC/SLC flash?
  • Billy Tallis - Thursday, January 3, 2019 - link

    I should have a review up of the Samsung 983 ZET Z-SSD next month. I'll include all the information we have about how Z-NAND differs from conventional planar and 3D SLC. Samsung did finally share some real numbers at ISSCC2018, and it looks like the biggest difference enabling lower latency is much smaller page sizes.
  • MrCommunistGen - Thursday, January 3, 2019 - link

    Very much looking forward to the review!
  • Greg100 - Thursday, January 3, 2019 - link

    It's a pity that we don't have consumer drives that are fast and at the same time have large enough capacity - 8TB. I would like to have a consumer U.2 drive that has 8TB capacity.

    What we have now… only 4TB Samsung and… SATA :(

    Will Intel DC P4510 8TB be compatible with Z390 motherboard, Intel Core i9-9900K and Windows 10 Pro? Connection via U.2 to M.2 cable (Intel J15713-001). Of course the M.2 port on the motherboard will be compatible with NVMe and PCI-E 3.0 x4.

    I know that compatibility should be checked on the motherboard manufacturer's website, but nobody has checked Intel DC P4510 drives and nobody will, because everyone assumes that the consumer does not need 8TB SSDs.

    Anandtech should also test these drives on consumer motherboards. Am I the only one who would like to use an Intel DC P4510 8TB with an Intel Z390, an Intel Core i9-9900K and Windows 10 Pro? Is it possible? Will there be any compatibility problems?
  • Billy Tallis - Thursday, January 3, 2019 - link

    I don't currently have the necessary adapter cables to connect a U.2 drive to our consumer testbed, but I will run the M.2 983 DCT through the consumer test suite at some point. I have plenty of consumer drives to be testing this month, though.

    Generally, I don't expect enterprise TLC drives to be that great for consumer workloads, due to the lack of SLC caching. And they'll definitely lose out on power efficiency when testing them at low queue depths. There shouldn't be any compatibility issues using enterprise drives on consumer systems, though. There's no need for separate NVMe drivers or anything like that. Some enterprise NVMe drives do add a lot to boot times.
  • Greg100 - Thursday, January 3, 2019 - link

    Thank you :-) So I will try that configuration.

    Maybe the Intel DC P4510 8TB will not be the boot champion or the most power-efficient drive at low queue depths, but having 8TB of data on a single drive with fast sequential access has huge benefits for me.

    Do you think it is worth waiting for 20TB Intel QLC or 8TB+ client drives? Any rumors?
