AnandTech Storage Bench - Heavy

Our Heavy storage benchmark is proportionally more write-heavy than The Destroyer, but much shorter overall. The total writes in the Heavy test aren't enough to fill the drive, so performance never drops down to steady state. This test is far more representative of a power user's day-to-day usage, and it is heavily influenced by the drive's peak performance. The Heavy workload test details can be found here. This test is run twice: once on a freshly erased drive and once after filling the drive with sequential writes.
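
For readers who want to approximate the empty-versus-full methodology described above on their own hardware, here is a minimal sketch using nvme-cli and fio. It illustrates the general idea only and is not AnandTech's actual trace-replay harness; the device path and the run_workload() placeholder are hypothetical, and the trace-replay step is left as a stub.

```python
# Illustrative sketch (not the actual ATSB harness): run a storage workload
# twice, once on a freshly erased drive and once after filling it with
# sequential writes. Requires root, nvme-cli, and fio; the device path is a
# hypothetical placeholder and everything on it will be destroyed.
import subprocess

DEV = "/dev/nvme0n1"  # hypothetical test device

def secure_erase(dev: str) -> None:
    # NVMe format with the secure-erase setting (nvme-cli).
    subprocess.run(["nvme", "format", dev, "--ses=1"], check=True)

def sequential_fill(dev: str) -> None:
    # Fill the whole device with large sequential writes using fio.
    subprocess.run([
        "fio", "--name=fill", f"--filename={dev}", "--rw=write",
        "--bs=128k", "--ioengine=libaio", "--iodepth=32",
        "--direct=1", "--size=100%",
    ], check=True)

def run_workload(dev: str, label: str) -> None:
    # Stub: replay your recorded trace or run your benchmark of choice here.
    print(f"[{label}] running workload on {dev} ...")

secure_erase(DEV)
run_workload(DEV, "empty")   # first pass: freshly erased drive

secure_erase(DEV)
sequential_fill(DEV)
run_workload(DEV, "full")    # second pass: drive filled with sequential data
```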

ATSB - Heavy (Data Rate)

On the Heavy test, the average data rates of the 512GB Samsung PM981 again lag slightly behind most MLC-based NVMe drives but are clearly ahead of the competitors' TLC drives. The 1TB PM981 behaves a bit oddly, with slower-than-expected performance after a secure erase but great performance when filled.

ATSB - Heavy (Average Latency) | ATSB - Heavy (99th Percentile Latency)

The average latency of the 1TB PM981 is a significant improvement over the 1TB 960 EVO, while the 512GB PM981 doesn't stand out from the other 512GB drives. The 99th percentile latencies aren't particularly good, and the 512GB PM981 scores worse than almost all the other PCIe SSDs of that size.
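
To make the difference between the two metrics concrete, the short sketch below (with made-up latency values, not measured PM981 data) computes an average and a nearest-rank 99th percentile from a list of per-I/O latencies, showing how a handful of slow I/Os can barely move the mean while completely dominating the 99th percentile.

```python
# Average vs. 99th percentile latency on a synthetic sample.
def mean_and_p99(latencies_us):
    xs = sorted(latencies_us)
    mean = sum(xs) / len(xs)
    p99 = xs[min(len(xs) - 1, int(0.99 * len(xs)))]  # nearest-rank estimate
    return mean, p99

# 990 fast reads plus 10 slow ones: the mean stays near 129 us,
# but the 99th percentile jumps all the way to the slow-path latency.
sample = [90.0] * 990 + [4000.0] * 10
print(mean_and_p99(sample))  # -> (129.1, 4000.0)
```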

ATSB - Heavy (Average Read Latency) | ATSB - Heavy (Average Write Latency)

The average write latency of the 1TB PM981 is excellent, especially when the test is run on an empty drive. Average read latencies for both drives are decent but aren't a big improvement over their predecessors.

ATSB - Heavy (99th Percentile Read Latency) | ATSB - Heavy (99th Percentile Write Latency)

The 99th percentile read latencies are one of the few ATSB scores where the TLC-based nature of the PM981 shows through. Many MLC-based SSDs are much better at keeping read latency under control, and the TLC-based Toshiba XG5 also scores much better than the PM981 here. The 99th percentile write latency of the 1TB PM981 is pretty good, in keeping with its average write latency, while the 512GB model could use some improvement.

Comments

  • mapesdhs - Thursday, November 30, 2017 - link

    And Drazick, what do you mean by 2.5" drives? If you're referring to SATA, well then no, it's already at its limit of 550MB/sec, and producing something akin to SATA4 would be pointless when it's also hobbled by the old AHCI protocol.

    Also, "don't like" is an emotional response; what's your evidence and argument that they're a bad product somehow? Have you used them?
  • WithoutWeakness - Thursday, November 30, 2017 - link

    By 2.5" drives I'm sure he means the same form factor as standard SATA 2.5" SSDs except using a newer, faster connection just like the U.2 connectors that Dan mentioned. We definitely hit the limit of what SATA 3 can deliver and it would be nice to have a new standard that can leverage PCIe NVMe SSDs in a form factor that allows us to use cables to put drives elsewhere in a case for better layouts and airflow. U.2 was supposed to be that connector but there are basically no drives that support the standard and very few boards with more than 1 U.2 port. There are a few adapters on the market that allow you to install an M.2 drive into a 2.5" enclosure with a U.2 connector on it but until motherboards have more than 1 U.2 port it won't be a real replacement for the ubiquity of SATA.
  • msabercr - Friday, December 1, 2017 - link

    Actually, there are M.2-to-U.2 adapters readily available from most MB vendors, and 7mm U.2 datacenter drives are starting to become a thing. See the Intel SSD DC P4501. I wouldn't be surprised if AIC disappears before too long. Limiting the power draw would be the major hurdle in creating such drives, but it's not impossible. EDSFF is going to pave the way for many high-density compact form factors for NVMe moving forward.
  • sleeplessclassics - Thursday, November 30, 2017 - link

    One more thing which I think will be different when these drives are launched as retail devices is the driver support for the Phoenix controller. While it is always difficult to pinpoint the exact bottlenecks on such bleeding-edge technology, I think a driver that is better optimized for the Phoenix controller will definitely produce better results (ceteris paribus).

    Also, there have been rumors of QLC NAND. If that is true, that could be the differentiator between the EVO and PRO series.
  • romrunning - Thursday, November 30, 2017 - link

    Yes - QLC... more latency, lower endurance, slower writes - what's not to like? :-S
  • Spunjji - Thursday, November 30, 2017 - link

    Lower price..? Higher densities and increased production? That's what it's all about.

    If 3D QLC performs like 2D TLC then it'll do just fine for mass storage.
  • mapesdhs - Thursday, November 30, 2017 - link

    Good point, given the way most products seem to be able to tolerate far more writes than they're officially rated for, in which case it's likely most users will want something newer long before a QLC product's endurance has been reached. If one is doing something that will drain the endurance a lot faster, then one should be using something more suitable anyway.
  • romrunning - Thursday, November 30, 2017 - link

    Sure, but QLC is just like TLC - once you force it on enough people and you say it's "good enough", then the higher-performing but costlier flash (like SLC/MLC) is slowly removed from the product portfolio. I'm not in favor of these race-to-the-bottom "advances", just to reduce the price a bit for the consumer but more for the mfg. You may get a slight bump in capacity, but for me, the performance/endurance trade-off with a slight reduction in price isn't worth it.

    Now, I suppose it doesn't matter anymore to me since I'll still be buying the 960 Pro until the Optane 900p reaches better pricing. But the slippery slope you encounter is that new product "advances" are usually better when compared to the "current" state of tech. If the current standard is QLC, then the new "improvement" might only be raising it to levels that SLC/MLC were at previously. So the possibility is that it may not be that much of an improvement.
  • bcronce - Thursday, November 30, 2017 - link

    For read heavy mass storage drives, slower writes is fine. SSDs are getting fast enough that the IO or CPU is the bottleneck. Higher read latency for small queues will hurt performance, but not by a whole lot.

    The endurance is only an issue if you re-write your data a lot, like a paging file or a game drive that sees a lot of updates. A relatively static mass-media drive will probably be just fine.
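
To put that in rough numbers, the sketch below uses purely hypothetical figures (a 400 TBW endurance rating and about 20 GB of writes per day for a mostly static media drive) to show how long such a drive would take to reach its rated endurance.

```python
# Back-of-the-envelope endurance estimate with hypothetical numbers.
TBW_RATING_TB = 400        # assumed endurance rating, terabytes written
WRITES_PER_DAY_GB = 20     # assumed average daily write volume

years_to_exhaust = (TBW_RATING_TB * 1000) / (WRITES_PER_DAY_GB * 365)
print(f"~{years_to_exhaust:.0f} years to reach the rated TBW")  # ~55 years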
  • sleeplessclassics - Thursday, November 30, 2017 - link

    Latency can (to some extent) be handled with a bigger DRAM buffer. Also, controllers are the key here and not the NAND type. Today, even TLC can perform better than MLC/SLC from just 2-3 generations ago due to better controllers.

    A couple of years ago, and even last year, a 500GB SSD was around $80. If the prices were sane, 64-layer 3D TLC would have been below $50 for sure.
    And 96-layer QLC can give real competition to HDDs.

    As for lower endurance, that can be handled by slightly higher over-provisioning and slower writes... well, they would be okay for 95% of mainstream users.
    Enthusiasts have Optane and Z-NAND.
