Mixed Random Performance

Our test of mixed random reads and writes covers mixes varying from pure reads to pure writes at 10% increments. Each mix is tested for up to 1 minute or 32GB of data transferred. The test is conducted with a queue depth of 4, and is limited to a 64GB span of the drive. In between each mix, the drive is given idle time of up to one minute so that the overall duty cycle is 50%.
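
For readers who want to approximate this sweep on their own hardware, a minimal sketch using fio is shown below. It is not necessarily our exact harness; the device path, ioengine, and specific fio flags are assumptions chosen to mirror the parameters described above.

    # Illustrative sketch (not the actual test harness): sweep read/write mixes
    # from 100% reads to 100% writes in 10% steps with fio, using the limits
    # described above. WARNING: this writes directly to the target device and
    # will destroy its contents; point DEVICE at a drive you can safely erase.
    import json
    import subprocess
    import time

    DEVICE = "/dev/nvme0n1"  # hypothetical target drive; requires root

    for read_pct in range(100, -1, -10):
        start = time.time()
        result = subprocess.run([
            "fio",
            "--name=mixed-random",
            f"--filename={DEVICE}",
            "--direct=1",
            "--ioengine=libaio",
            "--rw=randrw",
            f"--rwmixread={read_pct}",
            "--bs=4k",         # 4kB random accesses
            "--iodepth=4",     # queue depth of 4
            "--size=64g",      # limit the test to a 64GB span of the drive
            "--io_size=32g",   # stop after 32GB transferred...
            "--runtime=60",    # ...or after one minute, whichever comes first
            "--output-format=json",
        ], capture_output=True, text=True, check=True)
        stats = json.loads(result.stdout)["jobs"][0]
        total_iops = stats["read"]["iops"] + stats["write"]["iops"]
        print(f"{read_pct}% reads: {total_iops:.0f} IOPS")

        # Idle for as long as the mix ran (capped at one minute) so the
        # overall duty cycle stays near 50%.
        time.sleep(min(time.time() - start, 60))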

Mixed 4kB Random Read/Write

The mixed random I/O performance of the Samsung PM981 is a big improvement over last generation's 960 EVO. The 1TB PM981 beats out even the MLC-based 960 PRO, while the smaller 512GB PM981 is a bit slower than the 960 PRO of the same size.

As the proportion of writes in the mixed workload increases, the PM981 steadily gains performance, pulling further and further ahead of the 960 EVO. The 512GB PM981's main weakness is that its performance doesn't hit quite as high a peak during the final phases of the test, when the workload is almost entirely random writes.

Mixed Sequential Performance

Our test of mixed sequential reads and writes differs from the mixed random I/O test by performing 128kB sequential accesses rather than 4kB accesses at random locations, and the sequential test is conducted at queue depth 1. The range of mixes tested is the same, and the timing and limits on data transfers are also the same as above.
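
The same sketch applies to the sequential test; under the same assumptions as above, only the access pattern, block size, and queue depth change. The helper below (again illustrative, not our exact setup) builds the corresponding fio argument list for one mix.

    # Sequential variant of the sweep above: 128kB sequential transfers at
    # queue depth 1, with the same per-mix time and data limits. The flags
    # are assumptions, not the exact settings behind these results.
    def sequential_mix_args(read_pct, device="/dev/nvme0n1"):
        return [
            "fio",
            "--name=mixed-sequential",
            f"--filename={device}",
            "--direct=1",
            "--ioengine=libaio",
            "--rw=rw",                  # mixed sequential reads and writes
            f"--rwmixread={read_pct}",
            "--bs=128k",                # 128kB accesses instead of 4kB
            "--iodepth=1",              # queue depth 1
            "--size=64g",
            "--io_size=32g",
            "--runtime=60",
            "--output-format=json",
        ]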

Mixed 128kB Sequential Read/Write

The 512GB PM981 matches the mixed sequential performance of the MLC-based 512GB 960 PRO, while the 1TB PM981 is substantially faster than the 960 PRO or any other flash-based SSD.

The Samsung 960 PRO 1TB outperforms the 1TB PM981 during the early read-heavy phases of the mixed sequential test, but then its performance drops off precipitously while the PM981 retains its performance until later in the test. The 512GB PM981 averages almost exactly the same performance as the 512GB 960 PRO, but with substantial differences in the details: the 960 PRO is faster at either end of the test, but the PM981 has a significant advantage for more even mixes of reads and writes.

53 Comments

  • mapesdhs - Thursday, November 30, 2017 - link

    And Drazick, what do you mean by 2.5" drives? If you're referring to SATA, well then no, it's already at its limit of 550MB/sec, and producing something akin to SATA4 would be pointless when it's also hobbled by the old AHCI protocol.

    Also, "don't like" is an emotional response; what's your evidence and argument that they're a bad product somehow? Have you used them?
  • WithoutWeakness - Thursday, November 30, 2017 - link

    By 2.5" drives I'm sure he means the same form factor as standard SATA 2.5" SSDs except using a newer, faster connection just like the U.2 connectors that Dan mentioned. We definitely hit the limit of what SATA 3 can deliver and it would be nice to have a new standard that can leverage PCIe NVMe SSDs in a form factor that allows us to use cables to put drives elsewhere in a case for better layouts and airflow. U.2 was supposed to be that connector but there are basically no drives that support the standard and very few boards with more than 1 U.2 port. There are a few adapters on the market that allow you to install an M.2 drive into a 2.5" enclosure with a U.2 connector on it but until motherboards have more than 1 U.2 port it won't be a real replacement for the ubiquity of SATA.
  • msabercr - Friday, December 1, 2017 - link

    Actually, there are M.2 to U.2 connectors readily available from most MB vendors, and 7mm U.2 datacenter drives are starting to become a thing. See the Intel SSD DC P4501. I wouldn't be surprised if AIC disappears before too long. Limiting the power draw would be the major hurdle in creating such drives, but it's not impossible. The EDSFF is going to pave the way for many high-density compact form factors for NVMe moving forward.
  • sleeplessclassics - Thursday, November 30, 2017 - link

    One more thing which I think will be different when these drives are launched as retail devices is the driver support for the Phoenix controller. While it is always difficult to pinpoint the exact bottlenecks in such bleeding-edge technology, I think a driver that is better optimized for the Phoenix controller will definitely produce better results (ceteris paribus).

    Also, there have been rumors of QLC NAND. If that is true, it could be the differentiator between the EVO and PRO series.
  • romrunning - Thursday, November 30, 2017 - link

    Yes - QLC... more latency, lower endurance, slower writes - what's not to like? :-S
  • Spunjji - Thursday, November 30, 2017 - link

    Lower price..? Higher densities and increased production? That's what it's all about.

    If 3D QLC performs like 2D TLC then it'll do just fine for mass storage.
  • mapesdhs - Thursday, November 30, 2017 - link

    Good point, given the way most products seem to be able to tolerate far more writes than they're officially rated for, in which case it's likely most users will want something newer long before a QLC product's endurance has been reached. If one is doing something that will drain the endurance a lot faster, then one should be using something more suitable anyway.
  • romrunning - Thursday, November 30, 2017 - link

    Sure, but QLC is just like TLC - once you force it on enough people and say it's "good enough", the higher-performing but costlier flash (like SLC/MLC) is slowly removed from the product portfolio. I'm not in favor of these race-to-the-bottom "advances", which reduce the price a bit for the consumer but more for the mfg. You may get a slight bump in capacity, but for me, the performance/endurance trade-off with a slight reduction in price isn't worth it.

    Now, I suppose it doesn't matter anymore to me since I'll still be buying the 960 Pro until the Optane 900p reaches better pricing. But the slippery slope you encounter is that new product "advances" are usually better when compared to the "current" state of tech. If the current standard is QLC, then the new "improvement" might only be raising it to levels that SLC/MLC were at previously. So the possibility is that it may not be that much of an improvement.
  • bcronce - Thursday, November 30, 2017 - link

    For read heavy mass storage drives, slower writes is fine. SSDs are getting fast enough that the IO or CPU is the bottleneck. Higher read latency for small queues will hurt performance, but not by a whole lot.

    The endurance is only an issue if you re-write your data a lot, like a paging file or a game drive that sees a lot of updates. A relatively static mass-media drive will probably be just fine.
  • sleeplessclassics - Thursday, November 30, 2017 - link

    Latency can (to some extent) be handled with a bigger DRAM buffer. Also, controllers are the key here, not the NAND type. Today, even TLC can perform better than the MLC/SLC of just 2-3 generations ago thanks to better controllers.

    A couple of years ago, and even last year, a 500GB SSD was around $80. If prices were sane, 64-layer 3D TLC would have been below $50 for sure.
    And 96-layer QLC could give real competition to HDDs.

    As for lower endurance, that can be handled with slightly higher over-provisioning and slower writes... well, those would be okay for 95% of mainstream users.
    Enthusiasts have Optane and Z-NAND.
